NAME
    JSON::XS - JSON serialising/deserialising, done correctly and fast

SYNOPSIS
     use JSON::XS;

     # exported functions, they croak on error
     # and expect/generate UTF-8

     $utf8_encoded_json_text = encode_json $perl_hash_or_arrayref;
     $perl_hash_or_arrayref  = decode_json $utf8_encoded_json_text;

     # OO-interface

     $coder = JSON::XS->new->ascii->pretty->allow_nonref;
     $pretty_printed_unencoded = $coder->encode ($perl_scalar);
     $perl_scalar = $coder->decode ($unicode_json_text);

     # Note that JSON version 2.0 and above will automatically use JSON::XS
     # if available, at virtually no speed overhead either, so you should
     # be able to just:

     use JSON;

     # and do the same things, except that you have a pure-perl fallback now.

DESCRIPTION
    This module converts Perl data structures to JSON and vice versa. Its
    primary goal is to be *correct* and its secondary goal is to be
    *fast*. To reach the latter goal it was written in C.

    Beginning with version 2.0 of the JSON module, when both JSON and
    JSON::XS are installed, JSON will fall back on JSON::XS (this can be
    overridden) with no overhead due to emulation (by inheriting
    constructor and methods). If JSON::XS is not available, it will fall
    back to the compatible JSON::PP module as backend, so using JSON
    instead of JSON::XS gives you a portable JSON API that can be fast
    when you need it and doesn't require a C compiler when that is a
    problem.

    As this is the n-th-something JSON module on CPAN, what was the reason
    to write yet another one? While there seem to be many JSON modules,
    none of them correctly handle all corner cases, and in most cases
    their maintainers are unresponsive, gone missing, or not listening to
    bug reports for other reasons.

    See MAPPING, below, on how JSON::XS maps Perl values to JSON values
    and vice versa.

FEATURES
    *   correct Unicode handling

        This module knows how to handle Unicode, documents how and when it
        does so, and even documents what "correct" means.

    *   round-trip integrity

        When you serialise a perl data structure using only data types
        supported by JSON and Perl, the deserialised data structure is
        identical on the Perl level. (e.g. the string "2.0" doesn't
        suddenly become "2" just because it looks like a number). There
        *are* minor exceptions to this, read the MAPPING section below to
        learn about those.

    *   strict checking of JSON correctness

        There is no guessing, no generating of illegal JSON texts by
        default, and only JSON is accepted as input by default (the latter
        is a security feature).

    *   fast

        Compared to other JSON modules and other serialisers such as
        Storable, this module usually compares favourably in terms of
        speed, too.

    *   simple to use

        This module has both a simple functional interface as well as an
        object oriented interface.

    *   reasonably versatile output formats

        You can choose between the most compact guaranteed-single-line
        format possible (nice for simple line-based protocols), a
        pure-ASCII format (for when your transport is not 8-bit clean, but
        still supports the whole Unicode range), or a pretty-printed
        format (for when you want to read that stuff). Or you can combine
        those features in whatever way you like.

FUNCTIONAL INTERFACE
    The following convenience methods are provided by this module. They
    are exported by default:

    $json_text = encode_json $perl_scalar
        Converts the given Perl data structure to a UTF-8 encoded, binary
        string (that is, the string contains octets only). Croaks on
        error.

        This function call is functionally identical to:

            $json_text = JSON::XS->new->utf8->encode ($perl_scalar)

        except being faster.

    $perl_scalar = decode_json $json_text
        The opposite of "encode_json": expects a UTF-8 (binary) string and
        tries to parse it as UTF-8 encoded JSON text, returning the
        resulting reference. Croaks on error.

        This function call is functionally identical to:

            $perl_scalar = JSON::XS->new->utf8->decode ($json_text)

        except being faster.

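    As a small sketch of how the two functions fit together (example code
    not from the original documentation; note that hash key order in the
    encoded text is unspecified unless "canonical" is enabled):

```perl
use strict;
use warnings;
use JSON::XS;

# encode_json produces a UTF-8 octet string...
my $data = { list => [ 1, 2, 3 ], ok => "yes" };
my $json = encode_json $data;

# ...and decode_json parses such a string back into a Perl structure.
my $copy = decode_json $json;

print $copy->{list}[2], "\n";   # prints 3
```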
A FEW NOTES ON UNICODE AND PERL
    Since this often leads to confusion, here are a few very clear words
    on how Unicode works in Perl, modulo bugs.

    1. Perl strings can store characters with ordinal values > 255.
        This enables you to store Unicode characters as single characters
        in a Perl string - very natural.

    2. Perl does *not* associate an encoding with your strings.
        ... until you force it to, e.g. when matching it against a regex,
        or printing the scalar to a file, in which case Perl either
        interprets your string as locale-encoded text, octets/binary, or
        as Unicode, depending on various settings. In no case is an
        encoding stored together with your data; it is *use* that decides
        encoding, not any magical meta data.

    3. The internal utf-8 flag has no meaning with regards to the encoding
    of your string.
        Just ignore that flag unless you debug a Perl bug, a module
        written in XS, or want to dive into the internals of perl.
        Otherwise it will only confuse you, as, despite the name, it says
        nothing about how your string is encoded. You can have Unicode
        strings with that flag set, with that flag clear, and you can have
        binary data with that flag set and that flag clear. Other
        possibilities exist, too.

        If you didn't know about that flag, so much the better: pretend it
        doesn't exist.

    4. A "Unicode String" is simply a string where each character can be
    validly interpreted as a Unicode code point.
        If you have UTF-8 encoded data, it is no longer a Unicode string,
        but a Unicode string encoded in UTF-8, giving you a binary string.

    5. A string containing "high" (> 255) character values is *not* a
    UTF-8 string.
        It's a fact. Learn to live with it.

    I hope this helps :)

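    The character/octet distinction above can be made concrete with a
    small sketch (not part of the original text): encode_json returns
    UTF-8 octets, while the OO encoder (with the "utf8" flag left
    disabled) returns a string of characters.

```perl
use strict;
use warnings;
use JSON::XS;

my $smiley = "\x{263a}";   # WHITE SMILING FACE, a single character

# Functional interface: UTF-8 octets; the smiley occupies three octets.
my $octets = encode_json [$smiley];
print length $octets, "\n";   # prints 7

# OO interface without ->utf8: a Unicode string; the smiley is one character.
my $chars = JSON::XS->new->encode ([$smiley]);
print length $chars, "\n";    # prints 5
```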
OBJECT-ORIENTED INTERFACE
    The object oriented interface lets you configure your own encoding or
    decoding style, within the limits of supported formats.

    $json = new JSON::XS
        Creates a new JSON::XS object that can be used to de/encode JSON
        strings. All boolean flags described below are by default
        *disabled*.

        The mutators for flags all return the JSON object again and thus
        calls can be chained:

            my $json = JSON::XS->new->utf8->space_after->encode ({a => [1,2]})
            => {"a": [1, 2]}

    $json = $json->ascii ([$enable])
    $enabled = $json->get_ascii
        If $enable is true (or missing), then the "encode" method will not
        generate characters outside the code range 0..127 (which is
        ASCII). Any Unicode characters outside that range will be escaped
        using either a single \uXXXX (BMP characters) or a double
        \uHHHH\uLLLL escape sequence, as per RFC4627. The resulting
        encoded JSON text can be treated as a native Unicode string, an
        ascii-encoded, latin1-encoded or UTF-8 encoded string, or any
        other superset of ASCII.

        If $enable is false, then the "encode" method will not escape
        Unicode characters unless required by the JSON syntax or other
        flags. This results in a faster and more compact format.

        See also the section *ENCODING/CODESET FLAG NOTES* later in this
        document.

        The main use for this flag is to produce JSON texts that can be
        transmitted over a 7-bit channel, as the encoded JSON texts will
        not contain any 8 bit characters.

            JSON::XS->new->ascii (1)->encode ([chr 0x10401])
            => ["\ud801\udc01"]

    $json = $json->latin1 ([$enable])
    $enabled = $json->get_latin1
        If $enable is true (or missing), then the "encode" method will
        encode the resulting JSON text as latin1 (or iso-8859-1), escaping
        any characters outside the code range 0..255. The resulting string
        can be treated as a latin1-encoded JSON text or a native Unicode
        string. The "decode" method will not be affected in any way by
        this flag, as "decode" by default expects Unicode, which is a
        strict superset of latin1.

        If $enable is false, then the "encode" method will not escape
        Unicode characters unless required by the JSON syntax or other
        flags.

        See also the section *ENCODING/CODESET FLAG NOTES* later in this
        document.

        The main use for this flag is efficiently encoding binary data as
        JSON text, as most octets will not be escaped, resulting in a
        smaller encoded size. The disadvantage is that the resulting JSON
        text is encoded in latin1 (and must correctly be treated as such
        when storing and transferring), a rare encoding for JSON. It is
        therefore most useful when you want to store data structures
        known to contain binary data efficiently in files or databases,
        not when talking to other JSON encoders/decoders.

            JSON::XS->new->latin1->encode (["\x{89}\x{abc}"])
            => ["\x{89}\\u0abc"]    # (perl syntax, U+0ABC escaped, U+89 not)

    $json = $json->utf8 ([$enable])
    $enabled = $json->get_utf8
        If $enable is true (or missing), then the "encode" method will
        encode the JSON result into UTF-8, as required by many protocols,
        while the "decode" method expects to be handed a UTF-8-encoded
        string. Please note that UTF-8-encoded strings do not contain any
        characters outside the range 0..255, they are thus useful for
        bytewise/binary I/O. In future versions, enabling this option
        might enable autodetection of the UTF-16 and UTF-32 encoding
        families, as described in RFC4627.

        If $enable is false, then the "encode" method will return the
        JSON string as a (non-encoded) Unicode string, while "decode"
        likewise expects a Unicode string. Any decoding or encoding (e.g.
        to UTF-8 or UTF-16) needs to be done yourself, e.g. using the
        Encode module.

        See also the section *ENCODING/CODESET FLAG NOTES* later in this
        document.

        Example, output UTF-16BE-encoded JSON:

            use Encode;
            $jsontext = encode "UTF-16BE", JSON::XS->new->encode ($object);

        Example, decode UTF-32LE-encoded JSON:

            use Encode;
            $object = JSON::XS->new->decode (decode "UTF-32LE", $jsontext);

    $json = $json->pretty ([$enable])
        This enables (or disables) all of the "indent", "space_before"
        and "space_after" (and in the future possibly more) flags in one
        call to generate the most readable (or most compact) form
        possible.

        Example, pretty-print some simple structure:

            my $json = JSON::XS->new->pretty(1)->encode ({a => [1,2]})
            =>
            {
               "a" : [
                  1,
                  2
               ]
            }

    $json = $json->indent ([$enable])
    $enabled = $json->get_indent
        If $enable is true (or missing), then the "encode" method will
        use a multiline format as output, putting every array member or
        object/hash key-value pair into its own line, indenting them
        properly.

        If $enable is false, no newlines or indenting will be produced,
        and the resulting JSON text is guaranteed not to contain any
        "newlines".

        This setting has no effect when decoding JSON texts.

    $json = $json->space_before ([$enable])
    $enabled = $json->get_space_before
        If $enable is true (or missing), then the "encode" method will
        add an extra optional space before the ":" separating keys from
        values in JSON objects.

        If $enable is false, then the "encode" method will not add any
        extra space at those places.

        This setting has no effect when decoding JSON texts. You will
        also most likely combine this setting with "space_after".

        Example, space_before enabled, space_after and indent disabled:

            {"key" :"value"}

    $json = $json->space_after ([$enable])
    $enabled = $json->get_space_after
        If $enable is true (or missing), then the "encode" method will
        add an extra optional space after the ":" separating keys from
        values in JSON objects and extra whitespace after the ","
        separating key-value pairs and array members.

        If $enable is false, then the "encode" method will not add any
        extra space at those places.

        This setting has no effect when decoding JSON texts.

        Example, space_before and indent disabled, space_after enabled:

            {"key": "value"}

    $json = $json->relaxed ([$enable])
    $enabled = $json->get_relaxed
        If $enable is true (or missing), then "decode" will accept some
        extensions to normal JSON syntax (see below). "encode" will not
        be affected in any way. *Be aware that this option makes you
        accept invalid JSON texts as if they were valid!* I suggest using
        this option only to parse application-specific files written by
        humans (configuration files, resource files etc.)

        If $enable is false (the default), then "decode" will only accept
        valid JSON texts.

        Currently accepted extensions are:

        *   list items can have an end-comma

            JSON *separates* array elements and key-value pairs with
            commas. This can be annoying if you write JSON texts manually
            and want to be able to quickly append elements, so this
            extension accepts a comma at the end of such items, not just
            between them:

                [
                   1,
                   2, <- this comma not normally allowed
                ]
                {
                   "k1": "v1",
                   "k2": "v2", <- this comma not normally allowed
                }

        *   shell-style '#'-comments

            Whenever JSON allows whitespace, shell-style comments are
            additionally allowed. They are terminated by the first
            carriage-return or line-feed character, after which more
            white-space and comments are allowed.

                [
                   1, # this comment not allowed in JSON
                      # neither this one...
                ]

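        A short sketch (added here for illustration; the config snippet is
        made up) of what "relaxed" accepts:

```perl
use strict;
use warnings;
use JSON::XS;

my $coder = JSON::XS->new->relaxed;

# Trailing comma and '#' comments: invalid JSON, but accepted in relaxed mode.
my $cfg = $coder->decode (<<'EOF');
[
   1,  # first entry
   2,  # second entry
]
EOF

print scalar @$cfg, "\n";   # prints 2
```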
    $json = $json->canonical ([$enable])
    $enabled = $json->get_canonical
        If $enable is true (or missing), then the "encode" method will
        output JSON objects by sorting their keys. This adds a
        comparatively high overhead.

        If $enable is false, then the "encode" method will output
        key-value pairs in the order Perl stores them (which will likely
        change between runs of the same script, and can change even
        within the same run from 5.18 onwards).

        This option is useful if you want the same data structure to be
        encoded as the same JSON text (given the same overall settings).
        If it is disabled, the same hash might be encoded differently
        even if it contains the same data, as key-value pairs have no
        inherent ordering in Perl.

        This setting has no effect when decoding JSON texts.

        This setting currently has no effect on tied hashes.

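        For example (a minimal sketch, not from the original text), with
        "canonical" enabled the same hash always encodes to the same
        text:

```perl
use strict;
use warnings;
use JSON::XS;

my $coder = JSON::XS->new->canonical;

# With sorted keys the output is deterministic across runs.
print $coder->encode ({ b => 2, a => 1, c => 3 }), "\n";
# prints {"a":1,"b":2,"c":3}
```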
    $json = $json->allow_nonref ([$enable])
    $enabled = $json->get_allow_nonref
        If $enable is true (or missing), then the "encode" method can
        convert a non-reference into its corresponding string, number or
        null JSON value, which is an extension to RFC4627. Likewise,
        "decode" will accept those JSON values instead of croaking.

        If $enable is false, then the "encode" method will croak if it
        isn't passed an arrayref or hashref, as JSON texts must either be
        an object or array. Likewise, "decode" will croak if given
        something that is not a JSON object or array.

        Example, encode a Perl scalar as JSON value with enabled
        "allow_nonref", resulting in an invalid JSON text:

            JSON::XS->new->allow_nonref->encode ("Hello, World!")
            => "Hello, World!"

    $json = $json->allow_unknown ([$enable])
    $enabled = $json->get_allow_unknown
        If $enable is true (or missing), then "encode" will *not* throw
        an exception when it encounters values it cannot represent in
        JSON (for example, filehandles) but instead will encode a JSON
        "null" value. Note that blessed objects are not included here and
        are handled separately by "allow_blessed" and "convert_blessed".

        If $enable is false (the default), then "encode" will throw an
        exception when it encounters anything it cannot encode as JSON.

        This option does not affect "decode" in any way, and it is
        recommended to leave it off unless you know your communications
        partner.

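        A brief sketch (not from the original text) using a code
        reference, which has no JSON representation:

```perl
use strict;
use warnings;
use JSON::XS;

# With allow_unknown, an unrepresentable value becomes null...
my $coder = JSON::XS->new->allow_unknown;
print $coder->encode ([ sub { } ]), "\n";   # prints [null]

# ...without the flag, the same call croaks.
eval { JSON::XS->new->encode ([ sub { } ]) };
print "croaked\n" if $@;
```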
    $json = $json->allow_blessed ([$enable])
    $enabled = $json->get_allow_blessed
        See "OBJECT SERIALISATION" for details.

        If $enable is true (or missing), then the "encode" method will
        not barf when it encounters a blessed reference that it cannot
        convert otherwise. Instead, a JSON "null" value is encoded in
        place of the object.

        If $enable is false (the default), then "encode" will throw an
        exception when it encounters a blessed object that it cannot
        convert otherwise.

        This setting has no effect on "decode".

    $json = $json->convert_blessed ([$enable])
    $enabled = $json->get_convert_blessed
        See "OBJECT SERIALISATION" for details.

        If $enable is true (or missing), then "encode", upon encountering
        a blessed object, will check for the availability of the
        "TO_JSON" method on the object's class. If found, it will be
        called in scalar context and the resulting scalar will be encoded
        instead of the object.

        The "TO_JSON" method may safely call die if it wants. If
        "TO_JSON" returns other blessed objects, those will be handled in
        the same way. "TO_JSON" must take care of not causing an endless
        recursion cycle (== crash) in this case. The name of "TO_JSON"
        was chosen because other methods called by the Perl core (== not
        by the user of the object) are usually in upper case letters and
        to avoid collisions with any "to_json" function or method.

        If $enable is false (the default), then "encode" will not
        consider this type of conversion.

        This setting has no effect on "decode".

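        A minimal sketch (the "Point" class is hypothetical, added here
        for illustration):

```perl
use strict;
use warnings;
use JSON::XS;

package Point;
sub new     { my ($class, %args) = @_; bless { %args }, $class }
# Called by encode; returns a plain structure encoded in place of the object.
sub TO_JSON { my ($self) = @_; return { x => $self->{x}, y => $self->{y} } }

package main;
my $coder = JSON::XS->new->convert_blessed->canonical;
print $coder->encode (Point->new (x => 1, y => 2)), "\n";
# prints {"x":1,"y":2}
```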
    $json = $json->allow_tags ([$enable])
    $enabled = $json->get_allow_tags
        See "OBJECT SERIALISATION" for details.

        If $enable is true (or missing), then "encode", upon encountering
        a blessed object, will check for the availability of the "FREEZE"
        method on the object's class. If found, it will be used to
        serialise the object into a nonstandard tagged JSON value (that
        JSON decoders cannot decode).

        It also causes "decode" to parse such tagged JSON values and
        deserialise them via a call to the "THAW" method.

        If $enable is false (the default), then "encode" will not
        consider this type of conversion, and tagged JSON values will
        cause a parse error in "decode", as if tags were not part of the
        grammar.

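        A sketch of the FREEZE/THAW round trip (the "MyRange" class is
        hypothetical; the exact tagged text shown in the comment is an
        assumption about the nonstandard format):

```perl
use strict;
use warnings;
use JSON::XS;

package MyRange;
sub new    { my ($class, $lo, $hi) = @_; bless { lo => $lo, hi => $hi }, $class }
# FREEZE returns the list of values to store in the tagged value.
sub FREEZE { my ($self, $serialiser) = @_; return ($self->{lo}, $self->{hi}) }
# THAW receives those values back and reconstructs the object.
sub THAW   { my ($class, $serialiser, $lo, $hi) = @_; return $class->new ($lo, $hi) }

package main;
my $coder  = JSON::XS->new->allow_tags;
my $tagged = $coder->encode (MyRange->new (1, 9));  # e.g. ("MyRange")[1,9]
my $copy   = $coder->decode ($tagged);
print $copy->{hi}, "\n";   # prints 9
```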
    $json = $json->filter_json_object ([$coderef->($hashref)])
        When $coderef is specified, it will be called from "decode" each
        time it decodes a JSON object. The only argument is a reference
        to the newly-created hash. If the code reference returns a single
        scalar (which need not be a reference), this value (i.e. a copy
        of that scalar to avoid aliasing) is inserted into the
        deserialised data structure. If it returns an empty list (NOTE:
        *not* "undef", which is a valid scalar), the original
        deserialised hash will be inserted. This setting can slow down
        decoding considerably.

        When $coderef is omitted or undefined, any existing callback will
        be removed and "decode" will not change the deserialised hash in
        any way.

        Example, convert all JSON objects into the integer 5:

            my $js = JSON::XS->new->filter_json_object (sub { 5 });
            # returns [5]
            $js->decode ('[{}]');
            # throws an exception because allow_nonref is not enabled
            # so a lone 5 is not allowed.
            $js->decode ('{"a":1, "b":2}');

    $json = $json->filter_json_single_key_object ($key [=> $coderef->($value)])
        Works similarly to "filter_json_object", but is only called for
        JSON objects having a single key named $key.

        This $coderef is called before the one specified via
        "filter_json_object", if any. It gets passed the single value in
        the JSON object. If it returns a single value, it will be
        inserted into the data structure. If it returns nothing (not even
        "undef" but the empty list), the callback from
        "filter_json_object" will be called next, as if no single-key
        callback were specified.

        If $coderef is omitted or undefined, the corresponding callback
        will be disabled. There can only ever be one callback for a given
        key.

        As this callback gets called less often than the
        "filter_json_object" one, decoding speed will not usually suffer
        as much. Therefore, single-key objects make excellent targets to
        serialise Perl objects into, especially as single-key JSON
        objects are as close to the type-tagged value concept as JSON
        gets (it's basically an ID/VALUE tuple). Of course, JSON does not
        support this in any way, so you need to make sure your data never
        looks like a serialised Perl hash.

        Typical names for the single object key are "__class_whatever__",
        or "$__dollars_are_rarely_used__$" or "}ugly_brace_placement", or
        even things like "__class_md5sum(classname)__", to reduce the
        risk of clashing with real hashes.

        Example, decode JSON objects of the form "{ "__widget__" => <id> }"
        into the corresponding $WIDGET{<id>} object:

            # return whatever is in $WIDGET{5}:
            JSON::XS
               ->new
               ->filter_json_single_key_object (__widget__ => sub {
                     $WIDGET{ $_[0] }
                  })
               ->decode ('{"__widget__": 5}')

            # this can be used with a TO_JSON method in some "widget" class
            # for serialisation to json:
            sub WidgetBase::TO_JSON {
               my ($self) = @_;

               unless ($self->{id}) {
                  $self->{id} = ..get..some..id..;
                  $WIDGET{$self->{id}} = $self;
               }

               { __widget__ => $self->{id} }
            }

    $json = $json->shrink ([$enable])
    $enabled = $json->get_shrink
        Perl usually over-allocates memory a bit when allocating space
        for strings. This flag optionally resizes strings generated by
        either "encode" or "decode" to their minimum size possible. This
        can save memory when your JSON texts are either very very long or
        you have many short strings. It will also try to downgrade any
        strings to octet-form if possible: perl stores strings internally
        either in an encoding called UTF-X or in octet-form. The latter
        cannot store everything but uses less space in general (and some
        buggy Perl or C code might even rely on that internal
        representation being used).

        The actual definition of what shrink does might change in future
        versions, but it will always try to save space at the expense of
        time.

        If $enable is true (or missing), the string returned by "encode"
        will be shrunk-to-fit, while all strings generated by "decode"
        will also be shrunk-to-fit.

        If $enable is false, then the normal perl allocation algorithms
        are used. If you work with your data, then this is likely to be
        faster.

        In the future, this setting might control other things, such as
        converting strings that look like integers or floats into
        integers or floats internally (there is no difference on the Perl
        level), saving space.

    $json = $json->max_depth ([$maximum_nesting_depth])
    $max_depth = $json->get_max_depth
        Sets the maximum nesting level (default 512) accepted while
        encoding or decoding. If a higher nesting level is detected in
        JSON text or a Perl data structure, then the encoder and decoder
        will stop and croak at that point.

        Nesting level is defined by the number of hash- or arrayrefs that
        the encoder needs to traverse to reach a given point, or by the
        number of "{" or "[" characters without their matching closing
        parenthesis crossed to reach a given character in a string.

        Setting the maximum depth to one disallows any nesting, ensuring
        that the data is only a single hash/object or array.

        If no argument is given, the highest possible setting will be
        used, which is rarely useful.

        Note that nesting is implemented by recursion in C. The default
        value has been chosen to be as large as typical operating systems
        allow without crashing.

        See SECURITY CONSIDERATIONS, below, for more info on why this is
        useful.

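        To make the limit concrete (a small sketch, not from the original
        text):

```perl
use strict;
use warnings;
use JSON::XS;

my $coder = JSON::XS->new->max_depth (2);

# Nesting level 2: within the limit, decodes fine.
my $ok = $coder->decode ('[[1]]');

# Nesting level 3: exceeds the limit, so decode croaks.
eval { $coder->decode ('[[[1]]]') };
print $@ ? "too deep\n" : "accepted\n";
```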
    $json = $json->max_size ([$maximum_string_size])
    $max_size = $json->get_max_size
        Set the maximum length a JSON text may have (in bytes) where
        decoding is being attempted. The default is 0, meaning no limit.
        When "decode" is called on a string that is longer than this many
        bytes, it will not attempt to decode the string but throw an
        exception. This setting has no effect on "encode" (yet).

        If no argument is given, the limit check will be deactivated
        (same as when 0 is specified).

        See SECURITY CONSIDERATIONS, below, for more info on why this is
        useful.

    $json_text = $json->encode ($perl_scalar)
        Converts the given Perl value or data structure to its JSON
        representation. Croaks on error.

    $perl_scalar = $json->decode ($json_text)
        The opposite of "encode": expects a JSON text and tries to parse
        it, returning the resulting simple scalar or reference. Croaks on
        error.

    ($perl_scalar, $characters) = $json->decode_prefix ($json_text)
        This works like the "decode" method, but instead of raising an
        exception when there is trailing garbage after the first JSON
        object, it will silently stop parsing there and return the number
        of characters consumed so far.

        This is useful if your JSON texts are not delimited by an outer
        protocol and you need to know where the JSON text ends.

            JSON::XS->new->decode_prefix ("[1] the tail")
            => ([1], 3)

INCREMENTAL PARSING
    In some cases, there is the need for incremental parsing of JSON
    texts. While this module always has to keep both JSON text and
    resulting Perl data structure in memory at one time, it does allow
    you to parse a JSON stream incrementally. It does so by accumulating
    text until it has a full JSON object, which it then can decode. This
    process is similar to using "decode_prefix" to see if a full JSON
    object is available, but is much more efficient (and can be
    implemented with a minimum of method calls).

    JSON::XS will only attempt to parse the JSON text once it is sure it
    has enough text to get a decisive result, using a very simple but
    truly incremental parser. This means that it sometimes won't stop as
    early as the full parser, for example, it doesn't detect mismatched
    parentheses. The only thing it guarantees is that it starts decoding
    as soon as a syntactically valid JSON text has been seen. This means
    you need to set resource limits (e.g. "max_size") to ensure the
    parser will stop parsing in the presence of syntax errors.

    The following methods implement this incremental parser.

    [void, scalar or list context] = $json->incr_parse ([$string])
        This is the central parsing function. It can both append new text
        and extract objects from the stream accumulated so far (both of
        these functions are optional).

        If $string is given, then this string is appended to the already
        existing JSON fragment stored in the $json object.

        After that, if the function is called in void context, it will
        simply return without doing anything further. This can be used to
        add more text in as many chunks as you want.

        If the method is called in scalar context, then it will try to
        extract exactly *one* JSON object. If that is successful, it will
        return this object, otherwise it will return "undef". If there is
        a parse error, this method will croak just as "decode" would do
        (one can then use "incr_skip" to skip the erroneous part). This
        is the most common way of using the method.

        And finally, in list context, it will try to extract as many
        objects from the stream as it can find and return them, or the
        empty list otherwise. For this to work, there must be no
        separators between the JSON objects or arrays, instead they must
        be concatenated back-to-back. If an error occurs, an exception
        will be raised as in the scalar context case. Note that in this
        case, any previously-parsed JSON texts will be lost.

        Example: Parse some JSON arrays/objects in a given string and
        return them.

            my @objs = JSON::XS->new->incr_parse ("[5][7][1,2]");

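        The void/scalar context interplay described above can be sketched
        like this (example not from the original text):

```perl
use strict;
use warnings;
use JSON::XS;

my $json = JSON::XS->new;

# Void context: just accumulate text, nothing is parsed yet.
$json->incr_parse ('[1,2');
$json->incr_parse (',3]');

# Scalar context: extract exactly one complete JSON value, or undef.
my $obj = $json->incr_parse;
print $obj->[2], "\n";   # prints 3
```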
    $lvalue_string = $json->incr_text
        This method returns the currently stored JSON fragment as an
        lvalue, that is, you can manipulate it. This *only* works when a
        preceding call to "incr_parse" in *scalar context* successfully
        returned an object. Under all other circumstances you must not
        call this function (I mean it: although in simple tests it might
        actually work, it *will* fail under real-world conditions). As a
        special exception, you can also call this method before having
        parsed anything.

        This function is useful in two cases: a) finding the trailing
        text after a JSON object or b) parsing multiple JSON objects
        separated by non-JSON text (such as commas).

696 |
$json->incr_skip |
697 |
This will reset the state of the incremental parser and will remove |
698 |
the parsed text from the input buffer so far. This is useful after |
699 |
"incr_parse" died, in which case the input buffer and incremental |
700 |
parser state is left unchanged, to skip the text parsed so far and |
701 |
to reset the parse state. |
702 |
|
703 |
The difference to "incr_reset" is that only text until the parse |
704 |
error occurred is removed. |
705 |
|
706 |
$json->incr_reset |
707 |
This completely resets the incremental parser, that is, after this |
708 |
call, it will be as if the parser had never parsed anything. |
709 |
|
710 |
This is useful if you want to repeatedly parse JSON objects and want |
711 |
to ignore any trailing data, which means you have to reset the |
712 |
parser after each successful decode. |
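
        For instance, the above can be combined into an error-tolerant read
        loop. This is an illustrative sketch (not from the module's test
        suite): it feeds chunks to the parser, catches the exception that
        "incr_parse" throws on malformed input, and calls "incr_reset" to
        discard the broken fragment before continuing:

```perl
use JSON::XS;

my $json = JSON::XS->new;

for my $chunk ('[1,2]', 'not json at all', '[3]') {
    my $obj = eval { $json->incr_parse ($chunk) };

    if ($@) {
        # a parse error leaves the buffer and parser state unchanged,
        # so discard everything and start afresh with the next chunk
        $json->incr_reset;
        next;
    }

    # use $obj here - it is undef when more data is needed
}
```

        Use "incr_skip" instead of "incr_reset" if you want to keep the
        not-yet-parsed remainder of the buffer.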

LIMITATIONS
    All options that affect decoding are supported, except "allow_nonref".
    The reason for this is that it cannot be made to work sensibly: JSON
    objects and arrays are self-delimited, i.e. you can concatenate them
    back to back and still decode them perfectly. This does not hold true
    for JSON numbers, however.

    For example, is the string 1 a single JSON number, or is it simply the
    start of 12? Or is 12 a single JSON number, or the concatenation of 1
    and 2? In neither case can you tell, and this is why JSON::XS takes the
    conservative route and disallows this case.

EXAMPLES
    Some examples will make all this clearer. First, a simple example that
    works similarly to "decode_prefix": We want to decode the JSON object at
    the start of a string and identify the portion after the JSON object:

        my $text = "[1,2,3] hello";

        my $json = JSON::XS->new;

        my $obj = $json->incr_parse ($text)
            or die "expected JSON object or array at beginning of string";

        my $tail = $json->incr_text;
        # $tail now contains " hello"

    Easy, isn't it?

    Now for a more complicated example: Imagine a hypothetical protocol
    where you read some requests from a TCP stream, and each request is a
    JSON array, without any separation between them (in fact, it is often
    useful to use newlines as "separators", as these get interpreted as
    whitespace at the start of the JSON text, which makes it possible to
    test said protocol with "telnet"...).

    Here is how you'd do it (it is trivial to write this in an event-based
    manner):

        my $json = JSON::XS->new;

        # read some data from the socket
        while (sysread $socket, my $buf, 4096) {

            # split and decode as many requests as possible
            for my $request ($json->incr_parse ($buf)) {
                # act on the $request
            }
        }

    Another complicated example: Assume you have a string with JSON objects
    or arrays, all separated by (optional) comma characters (e.g. "[1],[2],
    [3]"). To parse them, we have to skip the commas between the JSON texts,
    and here is where the lvalue-ness of "incr_text" comes in useful:

        my $text = "[1],[2], [3]";
        my $json = JSON::XS->new;

        # void context, so no parsing done
        $json->incr_parse ($text);

        # now extract as many objects as possible. note the
        # use of scalar context so incr_text can be called.
        while (my $obj = $json->incr_parse) {
            # do something with $obj

            # now skip the optional comma
            $json->incr_text =~ s/^ \s* , //x;
        }
|
    Now let's go for a very complex example: Assume that you have a gigantic
    JSON array-of-objects, many gigabytes in size, and you want to parse it,
    but you cannot load it into memory fully (this has actually happened in
    the real world :).

    Well, you lost, you have to implement your own JSON parser. But JSON::XS
    can still help you: You implement a (very simple) array parser and let
    JSON decode the array elements, which are all full JSON objects on their
    own (this wouldn't work if the array elements could be JSON numbers, for
    example):

        my $json = JSON::XS->new;

        # open the monster
        open my $fh, "<", "bigfile.json"
            or die "bigfile: $!";

        # first parse the initial "["
        for (;;) {
            sysread $fh, my $buf, 65536
                or die "read error: $!";
            $json->incr_parse ($buf); # void context, so no parsing

            # Exit the loop once we found and removed(!) the initial "[".
            # In essence, we are (ab-)using the $json object as a simple
            # scalar we append data to.
            last if $json->incr_text =~ s/^ \s* \[ //x;
        }

        # now we have skipped the initial "[", so continue
        # parsing all the elements.
        for (;;) {
            # in this loop we read data until we got a single JSON object
            for (;;) {
                if (my $obj = $json->incr_parse) {
                    # do something with $obj
                    last;
                }

                # add more data
                sysread $fh, my $buf, 65536
                    or die "read error: $!";
                $json->incr_parse ($buf); # void context, so no parsing
            }

            # in this loop we read data until we either found and parsed the
            # separating "," between elements, or the final "]"
            for (;;) {
                # first skip whitespace
                $json->incr_text =~ s/^\s*//;

                # if we find "]", we are done
                if ($json->incr_text =~ s/^\]//) {
                    print "finished.\n";
                    exit;
                }

                # if we find ",", we can continue with the next element
                if ($json->incr_text =~ s/^,//) {
                    last;
                }

                # if we find anything else, we have a parse error!
                if (length $json->incr_text) {
                    die "parse error near ", $json->incr_text;
                }

                # else add more data
                sysread $fh, my $buf, 65536
                    or die "read error: $!";
                $json->incr_parse ($buf); # void context, so no parsing
            }
        }

    This is a complex example, but most of the complexity comes from the
    fact that we are trying to be correct (bear with me if I am wrong, I
    never ran the above example :).
|
MAPPING
    This section describes how JSON::XS maps Perl values to JSON values and
    vice versa. These mappings are designed to "do the right thing" in most
    circumstances automatically, preserving round-tripping characteristics
    (what you put in comes out as something equivalent).

    For the more enlightened: note that in the following descriptions,
    lowercase *perl* refers to the Perl interpreter, while uppercase *Perl*
    refers to the abstract Perl language itself.

JSON -> PERL
    object
        A JSON object becomes a reference to a hash in Perl. No ordering of
        object keys is preserved (JSON does not preserve object key ordering
        itself).

    array
        A JSON array becomes a reference to an array in Perl.

    string
        A JSON string becomes a string scalar in Perl - Unicode codepoints
        in JSON are represented by the same codepoints in the Perl string,
        so no manual decoding is necessary.

    number
        A JSON number becomes either an integer, numeric (floating point) or
        string scalar in perl, depending on its range and any fractional
        parts. On the Perl level, there is no difference between those as
        Perl handles all the conversion details, but an integer may take
        slightly less memory and might represent more values exactly than
        floating point numbers.

        If the number consists of digits only, JSON::XS will try to
        represent it as an integer value. If that fails, it will try to
        represent it as a numeric (floating point) value if that is possible
        without loss of precision. Otherwise it will preserve the number as
        a string value (in which case you lose roundtripping ability, as the
        JSON number will be re-encoded to a JSON string).

        Numbers containing a fractional or exponential part will always be
        represented as numeric (floating point) values, possibly at a loss
        of precision (in which case you might lose perfect roundtripping
        ability, but the JSON number will still be re-encoded as a JSON
        number).

        Note that precision is not accuracy - binary floating point values
        cannot represent most decimal fractions exactly, and when converting
        from and to floating point, JSON::XS only guarantees precision up to
        but not including the least significant bit.
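
        As an illustration (a sketch - the exact cutoffs depend on your
        platform's integer and floating point sizes):

```perl
use JSON::XS;

my $decoded = decode_json '[1, 3.25, 123456789012345678901234567890]';

# $decoded->[0] is an integer scalar, $decoded->[1] a floating point
# scalar, and $decoded->[2] fits neither an integer nor (losslessly)
# a float on typical platforms, so it is preserved as the string
# "123456789012345678901234567890"
```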

    true, false
        These JSON atoms become "Types::Serialiser::true" and
        "Types::Serialiser::false", respectively. They are overloaded to act
        almost exactly like the numbers 1 and 0. You can check whether a
        scalar is a JSON boolean by using the "Types::Serialiser::is_bool"
        function (after "use Types::Serialiser", of course).
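
        For example, to distinguish a decoded JSON boolean from a plain 1
        or 0:

```perl
use JSON::XS;
use Types::Serialiser;

my $data = decode_json '{"enabled": true, "count": 1}';

# both values compare equal to 1, but only one is a JSON boolean
print "boolean\n" if  Types::Serialiser::is_bool $data->{enabled};
print "number\n"  if !Types::Serialiser::is_bool $data->{count};
```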

    null
        A JSON null atom becomes "undef" in Perl.

    shell-style comments ("# *text*")
        As a nonstandard extension to the JSON syntax that is enabled by the
        "relaxed" setting, shell-style comments are allowed. They can start
        anywhere outside strings and go till the end of the line.

    tagged values ("(*tag*)*value*")
        Another nonstandard extension to the JSON syntax, enabled with the
        "allow_tags" setting, are tagged values. In this implementation, the
        *tag* must be a perl package/class name encoded as a JSON string,
        and the *value* must be a JSON array encoding optional constructor
        arguments.

        See "OBJECT SERIALISATION", below, for details.
|
PERL -> JSON
    The mapping from Perl to JSON is slightly more difficult, as Perl is a
    truly typeless language, so we can only guess which JSON type is meant
    by a Perl value.

    hash references
        Perl hash references become JSON objects. As there is no inherent
        ordering in hash keys (or JSON objects), they will usually be
        encoded in a pseudo-random order. JSON::XS can optionally sort the
        hash keys (determined by the *canonical* flag), so the same
        datastructure will serialise to the same JSON text (given the same
        settings and version of JSON::XS), but this incurs a runtime
        overhead and is only rarely useful, e.g. when you want to compare
        some JSON text against another for equality.

    array references
        Perl array references become JSON arrays.

    other references
        Other unblessed references are generally not allowed and will cause
        an exception to be thrown, except for references to the integers 0
        and 1, which get turned into "false" and "true" atoms in JSON.

        Since "JSON::XS" uses the boolean model from Types::Serialiser, you
        can also "use Types::Serialiser" and then use
        "Types::Serialiser::false" and "Types::Serialiser::true" to improve
        readability.

            use Types::Serialiser;
            encode_json [\0, Types::Serialiser::true] # yields [false,true]

    Types::Serialiser::true, Types::Serialiser::false
        These special values from the Types::Serialiser module become JSON
        true and JSON false values, respectively. You can also use "\1" and
        "\0" directly if you want.

    blessed objects
        Blessed objects are not directly representable in JSON, but
        "JSON::XS" allows various ways of handling objects. See "OBJECT
        SERIALISATION", below, for details.

    simple scalars
        Simple Perl scalars (any scalar that is not a reference) are the
        most difficult objects to encode: JSON::XS will encode undefined
        scalars as JSON "null" values, scalars that have last been used in a
        string context before encoding as JSON strings, and anything else as
        number values:

            # dump as number
            encode_json [2]        # yields [2]
            encode_json [-3.0e17]  # yields [-3e+17]
            my $value = 5; encode_json [$value]  # yields [5]

            # used as string, so dump as string
            print $value;
            encode_json [$value]   # yields ["5"]

            # undef becomes null
            encode_json [undef]    # yields [null]

        You can force the type to be a JSON string by stringifying it:

            my $x = 3.1; # some variable containing a number
            "$x";        # stringified
            $x .= "";    # another, more awkward way to stringify
            print $x;    # perl does it for you, too, quite often

        You can force the type to be a JSON number by numifying it:

            my $x = "3"; # some variable containing a string
            $x += 0;     # numify it, ensuring it will be dumped as a number
            $x *= 1;     # same thing, the choice is yours.

        You cannot currently force the type in other, less obscure, ways.
        Tell me if you need this capability (but don't forget to explain why
        it's needed :).

        Note that numerical precision has the same meaning as under Perl (so
        binary to decimal conversion follows the same rules as in Perl,
        which can differ from other languages). Also, your perl interpreter
        might expose extensions to the floating point numbers of your
        platform, such as infinities or NaNs - these cannot be represented
        in JSON, and it is an error to pass those in.
|

OBJECT SERIALISATION
    As JSON cannot directly represent Perl objects, you have to choose
    between a pure JSON representation (without the ability to deserialise
    the object automatically again), and a nonstandard extension to the JSON
    syntax, tagged values.

SERIALISATION
    What happens when "JSON::XS" encounters a Perl object depends on the
    "allow_blessed", "convert_blessed" and "allow_tags" settings, which are
    used in this order:

    1. "allow_tags" is enabled and the object has a "FREEZE" method.
        In this case, "JSON::XS" uses the Types::Serialiser object
        serialisation protocol to create a tagged JSON value, using a
        nonstandard extension to the JSON syntax.

        This works by invoking the "FREEZE" method on the object, with the
        first argument being the object to serialise, and the second
        argument being the constant string "JSON" to distinguish it from
        other serialisers.

        The "FREEZE" method can return any number of values (i.e. zero or
        more). These values and the package/classname of the object will
        then be encoded as a tagged JSON value in the following format:

            ("classname")[FREEZE return values...]

        For example, the hypothetical "My::Object" "FREEZE" method might use
        the object's "type" and "id" members to encode the object:

            sub My::Object::FREEZE {
                my ($self, $serialiser) = @_;

                ($self->{type}, $self->{id})
            }

    2. "convert_blessed" is enabled and the object has a "TO_JSON" method.
        In this case, the "TO_JSON" method of the object is invoked in
        scalar context. It must return a single scalar that can be directly
        encoded into JSON. This scalar replaces the object in the JSON text.

        For example, the following "TO_JSON" method will convert all URI
        objects to JSON strings when serialised. The fact that these values
        originally were URI objects is lost.

            sub URI::TO_JSON {
                my ($uri) = @_;
                $uri->as_string
            }

    3. "allow_blessed" is enabled.
        The object will be serialised as a JSON null value.

    4. none of the above
        If none of the settings are enabled or the respective methods are
        missing, "JSON::XS" throws an exception.

DESERIALISATION
    For deserialisation there are only two cases to consider: either
    nonstandard tagging was used, in which case "allow_tags" decides, or
    objects cannot automatically be deserialised, in which case you can
    use postprocessing or the "filter_json_object" or
    "filter_json_single_key_object" callbacks to get some real objects out
    of your JSON.

    This section only considers the tagged value case: If a tagged JSON
    object is encountered during decoding and "allow_tags" is disabled, a
    parse error will result (as if tagged values were not part of the
    grammar).

    If "allow_tags" is enabled, "JSON::XS" will look up the "THAW" method of
    the package/classname used during serialisation (it will not attempt to
    load the package as a Perl module). If there is no such method, the
    decoding will fail with an error.

    Otherwise, the "THAW" method is invoked with the classname as first
    argument, the constant string "JSON" as second argument, and all the
    values from the JSON array (the values originally returned by the
    "FREEZE" method) as remaining arguments.

    The method must then return the object. While technically you can return
    any Perl scalar, you might have to enable the "allow_nonref" setting to
    make that work in all cases, so better return an actual blessed
    reference.

    As an example, let's implement a "THAW" function that regenerates the
    "My::Object" from the "FREEZE" example earlier:

        sub My::Object::THAW {
            my ($class, $serialiser, $type, $id) = @_;

            $class->new (type => $type, id => $id)
        }
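
    Putting the two halves together, a round trip through tagged values
    might look like this (a sketch - a "My::Object" class with suitable
    "new", "FREEZE" and "THAW" methods, as outlined earlier, is assumed; it
    is not part of this distribution):

```perl
use JSON::XS;

my $coder = JSON::XS->new->allow_tags;

# calls My::Object::FREEZE, yielding e.g. ("My::Object")["bar",13]
my $json = $coder->encode (My::Object->new (type => "bar", id => 13));

# calls My::Object::THAW ("My::Object", "JSON", "bar", 13)
my $obj = $coder->decode ($json);
```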

ENCODING/CODESET FLAG NOTES
    The interested reader might have seen a number of flags that signify
    encodings or codesets - "utf8", "latin1" and "ascii". There seems to be
    some confusion on what these do, so here is a short comparison:

    "utf8" controls whether the JSON text created by "encode" (and expected
    by "decode") is UTF-8 encoded or not, while "latin1" and "ascii" only
    control whether "encode" escapes character values outside their
    respective codeset range. None of these flags conflict with each
    other, although some combinations make less sense than others.

    Care has been taken to make all flags symmetrical with respect to
    "encode" and "decode", that is, texts encoded with any combination of
    these flag values will be correctly decoded when the same flags are used
    - in general, if you use different flag settings while encoding vs. when
    decoding, you likely have a bug somewhere.

    Below comes a verbose discussion of these flags. Note that a "codeset"
    is simply an abstract set of character-codepoint pairs, while an
    encoding takes those codepoint numbers and *encodes* them, in our case
    into octets. Unicode is (among other things) a codeset, UTF-8 is an
    encoding, and ISO-8859-1 (= latin 1) and ASCII are both codesets *and*
    encodings at the same time, which can be confusing.

    "utf8" flag disabled
        When "utf8" is disabled (the default), then "encode"/"decode"
        generate and expect Unicode strings, that is, characters with high
        ordinal Unicode values (> 255) will be encoded as such characters,
        and likewise such characters are decoded as-is, no changes to them
        will be done, except "(re-)interpreting" them as Unicode codepoints
        or Unicode characters, respectively (to Perl, these are the same
        thing in strings unless you do funny/weird/dumb stuff).

        This is useful when you want to do the encoding yourself (e.g. when
        you want to have UTF-16 encoded JSON texts) or when some other layer
        does the encoding for you (for example, when printing to a terminal
        using a filehandle that transparently encodes to UTF-8 you certainly
        do NOT want to UTF-8 encode your data first and have Perl encode it
        another time).

    "utf8" flag enabled
        If the "utf8" flag is enabled, "encode"/"decode" will encode all
        characters using the corresponding UTF-8 multi-byte sequence, and
        will expect your input strings to be encoded as UTF-8, that is, no
        "character" of the input string must have any value > 255, as UTF-8
        does not allow that.

        The "utf8" flag therefore switches between two modes: disabled means
        you will get a Unicode string in Perl, enabled means you get a
        UTF-8 encoded octet/binary string in Perl.
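
        A short demonstration of the difference (the snowman character
        U+2603 encodes to three octets in UTF-8):

```perl
use JSON::XS;

my $snowman = "\x{2603}";

my $unicode = JSON::XS->new->encode ([$snowman]);       # Unicode string
my $octets  = JSON::XS->new->utf8->encode ([$snowman]); # UTF-8 octets

# length $unicode is 5 (the snowman is a single character), while
# length $octets is 7 (the snowman takes three octets)
```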

    "latin1" or "ascii" flags enabled
        With "latin1" (or "ascii") enabled, "encode" will escape characters
        with ordinal values > 255 (> 127 with "ascii") and encode the
        remaining characters as specified by the "utf8" flag.

        If "utf8" is disabled, then the result is also correctly encoded in
        those character sets (as both are proper subsets of Unicode, meaning
        that a Unicode string with all character values < 256 is the same
        thing as an ISO-8859-1 string, and a Unicode string with all
        character values < 128 is the same thing as an ASCII string in
        Perl).

        If "utf8" is enabled, you still get a correct UTF-8-encoded string,
        regardless of these flags, just some more characters will be escaped
        using "\uXXXX" than before.

        Note that ISO-8859-1-*encoded* strings are not compatible with UTF-8
        encoding, while ASCII-encoded strings are. That is because the
        ISO-8859-1 encoding is NOT a subset of UTF-8 (despite the ISO-8859-1
        *codeset* being a subset of Unicode), while ASCII is.

        Surprisingly, "decode" will ignore these flags and so treat all
        input values as governed by the "utf8" flag. If it is disabled, this
        allows you to decode ISO-8859-1- and ASCII-encoded strings, as both
        are strict subsets of Unicode. If it is enabled, you can correctly
        decode UTF-8 encoded strings.

        So neither "latin1" nor "ascii" is incompatible with the "utf8"
        flag - they only govern when the JSON output engine escapes a
        character or not.

        The main use for "latin1" is to relatively efficiently store binary
        data as JSON, at the expense of breaking compatibility with most
        JSON decoders.

        The main use for "ascii" is to force the output to not contain
        characters with values > 127, which means you can interpret the
        resulting string as UTF-8, ISO-8859-1, ASCII, KOI8-R or just about
        any other character set and 8-bit encoding, and still get the same
        data structure back. This is useful when your channel for JSON
        transfer is not 8-bit clean or the encoding might be mangled in
        between (e.g. in mail), and works because ASCII is a proper subset
        of most 8-bit and multibyte encodings in use in the world.

JSON and ECMAscript
    JSON syntax is based on how literals are represented in javascript (the
    not-standardised predecessor of ECMAscript), which is presumably why it
    is called "JavaScript Object Notation".

    However, JSON is not a subset (and also not a superset, of course) of
    ECMAscript (the standard) or javascript (whatever browsers actually
    implement).

    If you want to use javascript's "eval" function to "parse" JSON, you
    might run into parse errors for valid JSON texts, or the resulting data
    structure might not be queryable:

    One of the problems is that U+2028 and U+2029 are valid characters
    inside JSON strings, but are not allowed in ECMAscript string literals,
    so the following Perl fragment will not output something that can be
    guaranteed to be parsable by javascript's "eval":

        use JSON::XS;

        print encode_json [chr 0x2028];

    The right fix for this is to use a proper JSON parser in your javascript
    programs, and not rely on "eval" (see for example Douglas Crockford's
    json2.js parser).

    If this is not an option, you can, as a stop-gap measure, simply encode
    to ASCII-only JSON:

        use JSON::XS;

        print JSON::XS->new->ascii->encode ([chr 0x2028]);

    Note that this will enlarge the resulting JSON text quite a bit if you
    have many non-ASCII characters. You might be tempted to run some regexes
    to only escape U+2028 and U+2029, e.g.:

        # DO NOT USE THIS!
        my $json = JSON::XS->new->utf8->encode ([chr 0x2028]);
        $json =~ s/\xe2\x80\xa8/\\u2028/g; # escape U+2028
        $json =~ s/\xe2\x80\xa9/\\u2029/g; # escape U+2029
        print $json;

    Note that *this is a bad idea*: the above only works for U+2028 and
    U+2029 and thus only for fully ECMAscript-compliant parsers. Many
    existing javascript implementations, however, have issues with other
    characters as well - using "eval" naively simply *will* cause problems.

    Another problem is that some javascript implementations reserve some
    property names for their own purposes (which probably makes them
    non-ECMAscript-compliant). For example, Iceweasel reserves the
    "__proto__" property name for its own purposes.

    If that is a problem, you could try to filter the resulting JSON
    output for these property strings, e.g.:

        $json =~ s/"__proto__"\s*:/"__proto__renamed":/g;

    This works because "__proto__" is not valid outside of strings, so every
    occurrence of ""__proto__"\s*:" must be a string used as a property
    name.

    If you know of other incompatibilities, please let me know.

JSON and YAML
    You often hear that JSON is a subset of YAML. This is, however, a mass
    hysteria(*) and very far from the truth (as of the time of this
    writing), so let me state it clearly: *in general, there is no way to
    configure JSON::XS to output a data structure as valid YAML* that works
    in all cases.

    If you really must use JSON::XS to generate YAML, you should use this
    algorithm (subject to change in future versions):

        my $to_yaml = JSON::XS->new->utf8->space_after (1);
        my $yaml = $to_yaml->encode ($ref) . "\n";

    This will *usually* generate JSON texts that also parse as valid YAML.
    Please note that YAML has hardcoded limits on (simple) object key
    lengths that JSON doesn't have and also has a different and
    incompatible unicode character escape syntax, so you should make sure
    that your hash keys are noticeably shorter than the 1024 "stream
    characters" YAML allows and that you do not have characters with
    codepoint values outside the Unicode BMP (basic multilingual plane).
    YAML also does not allow "\/" sequences in strings (which JSON::XS does
    not *currently* generate, but other JSON generators might).

    There might be other incompatibilities that I am not aware of (or the
    YAML specification has been changed yet again - it does so quite often).
    In general you should not try to generate YAML with a JSON generator or
    vice versa, or try to parse JSON with a YAML parser or vice versa:
    chances are high that you will run into severe interoperability problems
    when you least expect it.

    (*) I have been pressured multiple times by Brian Ingerson (one of the
        authors of the YAML specification) to remove this paragraph, despite
        him acknowledging that the actual incompatibilities exist. As I was
        personally bitten by this "JSON is YAML" lie, I refused and said I
        will continue to educate people about these issues, so others do not
        run into the same problem again and again. After this, Brian called
        me a (quote)*complete and worthless idiot*(unquote).

        In my opinion, instead of pressuring and insulting people who
        actually clarify issues with YAML and the wrong statements of some
        of its proponents, I would kindly suggest reading the JSON spec
        (which is not that difficult or long), finally making YAML
        compatible with it, and educating users about the changes, instead
        of spreading lies about the real compatibility for many *years* and
        trying to silence people who point out that it isn't true.

        Addendum/2009: the YAML 1.2 spec is still incompatible with JSON,
        even though the incompatibilities have been documented (and are
        known to Brian) for many years and the spec makes explicit claims
        that YAML is a superset of JSON. It would be so easy to fix, but
        apparently, bullying people and corrupting userdata is so much
        easier.
|
1324 |
SPEED |
1325 |
It seems that JSON::XS is surprisingly fast, as shown in the following |
1326 |
tables. They have been generated with the help of the "eg/bench" program |
1327 |
in the JSON::XS distribution, to make it easy to compare on your own |
1328 |
system. |
1329 |
|
1330 |
First comes a comparison between various modules using a very short |
1331 |
single-line JSON string (also available at |
1332 |
<http://dist.schmorp.de/misc/json/short.json>). |
1333 |
|
1334 |
{"method": "handleMessage", "params": ["user1", |
1335 |
"we were just talking"], "id": null, "array":[1,11,234,-5,1e5,1e7, |
1336 |
1, 0]} |
1337 |
|
1338 |
It shows the number of encodes/decodes per second (JSON::XS uses the |
1339 |
functional interface, while JSON::XS/2 uses the OO interface with |
1340 |
pretty-printing and hashkey sorting enabled, JSON::XS/3 enables shrink. |
1341 |
JSON::DWIW/DS uses the deserialise function, while JSON::DWIW::FJ uses |
1342 |
the from_json method). Higher is better: |
1343 |
|
1344 |
module | encode | decode | |
1345 |
--------------|------------|------------| |
1346 |
JSON::DWIW/DS | 86302.551 | 102300.098 | |
1347 |
JSON::DWIW/FJ | 86302.551 | 75983.768 | |
1348 |
JSON::PP | 15827.562 | 6638.658 | |
1349 |
JSON::Syck | 63358.066 | 47662.545 | |
1350 |
JSON::XS | 511500.488 | 511500.488 | |
1351 |
JSON::XS/2 | 291271.111 | 388361.481 | |
1352 |
JSON::XS/3 | 361577.931 | 361577.931 | |
1353 |
Storable | 66788.280 | 265462.278 | |
1354 |
--------------+------------+------------+ |
1355 |
|
That is, JSON::XS is almost six times faster than JSON::DWIW on
encoding, about five times faster on decoding, and thirty to
seventy times faster than JSON's pure perl implementation. It also
compares favourably to Storable for small amounts of data.

Using a longer test string (roughly 18KB, generated from the Yahoo! Locals
search API, also available at <http://dist.schmorp.de/misc/json/long.json>):

    module        | encode     | decode     |
    --------------|------------|------------|
    JSON::DWIW/DS |   1647.927 |   2673.916 |
    JSON::DWIW/FJ |   1630.249 |   2596.128 |
    JSON::PP      |    400.640 |     62.311 |
    JSON::Syck    |   1481.040 |   1524.869 |
    JSON::XS      |  20661.596 |   9541.183 |
    JSON::XS/2    |  10683.403 |   9416.938 |
    JSON::XS/3    |  20661.596 |   9400.054 |
    Storable      |  19765.806 |  10000.725 |
    --------------+------------+------------+

Again, JSON::XS leads by far (except for Storable, which, unsurprisingly,
decodes a bit faster).

On large strings containing lots of high Unicode characters, some
modules (such as JSON::PC) seem to decode faster than JSON::XS, but the
result will be broken due to missing (or wrong) Unicode handling. Others
refuse to decode or encode properly, so it was impossible to prepare a
fair comparison table for that case.

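The figures above come from the distribution's "eg/bench" program; a
comparable quick measurement can be sketched with the core Benchmark
module (assuming JSON::XS and JSON::PP are both installed; the test
structure and labels here are illustrative, not the actual benchmark):

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);
use JSON::XS ();
use JSON::PP ();

# Roughly the short test structure used above.
my $data = {
    method => "handleMessage",
    params => ["user1", "we were just talking"],
    id     => undef,
    array  => [1, 11, 234, -5, 1e5, 1e7, 1, 0],
};
my $json = JSON::XS::encode_json $data;

# Run each coder for about a second of CPU time and compare rates.
cmpthese -1, {
    "JSON::XS encode" => sub { JSON::XS::encode_json $data },
    "JSON::PP encode" => sub { JSON::PP::encode_json $data },
    "JSON::XS decode" => sub { JSON::XS::decode_json $json },
    "JSON::PP decode" => sub { JSON::PP::decode_json $json },
};
```

Absolute numbers will differ per machine, which is exactly why the text
suggests running the comparison on your own system.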
SECURITY CONSIDERATIONS
When you are using JSON in a protocol, talking to untrusted, potentially
hostile creatures requires relatively few measures.

First of all, your JSON decoder should be secure, that is, should not
have any buffer overflows. Obviously, this module should ensure that, and
I am trying hard to make that true, but you never know.

Second, you need to avoid resource-starving attacks. That means you
should limit the size of JSON texts you accept, or make sure that when
your resources run out, that's just fine (e.g. by using a separate
process that can crash safely). The size of a JSON text in octets or
characters is usually a good indication of the size of the resources
required to decode it into a Perl structure. While JSON::XS can check
the size of the JSON text, it might be too late when you already have it
in memory, so you might want to check the size before you accept the
string.

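One way to enforce such a limit inside JSON::XS itself is the "max_size"
setting, which makes "decode" croak on oversized texts. A minimal sketch
(the 1MB limit and the sample input are arbitrary):

```perl
use strict;
use warnings;
use JSON::XS;

# Croak on any JSON text longer than 1MB instead of trying to
# decode it into a possibly huge Perl structure.
my $coder = JSON::XS->new->utf8->max_size (1024 * 1024);

my $untrusted = '{"id": 1, "ok": true}';    # stands in for network input
my $data = eval { $coder->decode ($untrusted) };
defined $data or warn "rejected or invalid JSON: $@";
```

Note that this only guards the decode step; as the text above says, the
JSON text is already in memory by then, so checking the size before
accepting the string at all is still the safer option.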
Third, JSON::XS recurses using the C stack when decoding objects and
arrays. The C stack is a limited resource: for instance, on my amd64
machine with 8MB of stack size I can decode around 180k nested arrays
but only 14k nested JSON objects (due to perl itself recursing deeply on
croak to free the temporary). If that is exceeded, the program crashes.
To be conservative, the default nesting limit is set to 512. If your
process has a smaller stack, you should adjust this setting accordingly
with the "max_depth" method.

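A sketch of tightening that limit with "max_depth" (the depth of 32 and
the test string are arbitrary):

```perl
use strict;
use warnings;
use JSON::XS;

# Allow at most 32 levels of nesting; anything deeper makes decode
# croak long before the C stack is in danger.
my $coder = JSON::XS->new->utf8->max_depth (32);

my $deep = ("[" x 64) . ("]" x 64);         # a 64-level nested array
my $data = eval { $coder->decode ($deep) };
defined $data or warn "nested too deeply: $@";
```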
Something else could bomb you, too, that I forgot to think of. In that
case, you get to keep the pieces. I am always open to hints, though...

Also keep in mind that JSON::XS might leak the contents of your Perl data
structures in its error messages, so when you serialise sensitive
information you might want to make sure that exceptions thrown by
JSON::XS will not end up in front of untrusted eyes.

If you are using JSON::XS to return packets for consumption by JavaScript
scripts in a browser you should have a look at
<http://blog.archive.jpsykes.com/47/practical-csrf-and-json-security/>
to see whether you are vulnerable to some common attack vectors (which
really are browser design bugs, but it is still you who will have to
deal with them, as major browser developers care only about features, not
about getting security right).

INTEROPERABILITY WITH OTHER MODULES
"JSON::XS" uses the Types::Serialiser module to provide boolean
constants. That means that the JSON true and false values will be
compatible with the true and false values of other modules that do the
same, such as JSON::PP and CBOR::XS.

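For example, decoded booleans can be recognised and compared via the
shared Types::Serialiser constants (a small sketch, assuming
Types::Serialiser is available, which JSON::XS depends on):

```perl
use strict;
use warnings;
use JSON::XS qw(decode_json);
use Types::Serialiser;

my $data = decode_json '[true, false]';

# The decoded values are the shared boolean objects...
print "got true\n" if $data->[0] == Types::Serialiser::true;
print "got bool\n" if Types::Serialiser::is_bool $data->[1];

# ...which still behave like 1 and 0 in numeric context.
print $data->[0] + 0, "\n";
```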
THREADS
This module is *not* guaranteed to be thread safe and there are no plans
to change this until Perl gets real thread support (as opposed to the
horribly slow so-called "threads", which are simply bloated process
simulations - use fork, it's *much* faster, cheaper, better).

(It might actually work, but you have been warned).

THE PERILS OF SETLOCALE
Sometimes people avoid the Perl locale support and directly call the
system's setlocale function with "LC_ALL".

This breaks both perl and modules such as JSON::XS, as stringification
of numbers no longer works correctly (e.g. '$x = 0.1; print "$x"+1'
might print 1, and JSON::XS might output illegal JSON, as JSON::XS relies
on perl to stringify numbers).

The solution is simple: don't call "setlocale", or use it only for those
categories you need, such as "LC_MESSAGES" or "LC_CTYPE".

If you need "LC_NUMERIC", you should enable it only around the code that
actually needs it (avoiding stringification of numbers), and restore it
afterwards.

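A sketch of that save-and-restore pattern using POSIX's setlocale (the
German locale name is only an example and may not exist on every
system):

```perl
use strict;
use warnings;
use POSIX qw(setlocale LC_NUMERIC);

# Save the current numeric locale and switch only LC_NUMERIC for the
# code that needs locale-specific number formatting...
my $old = setlocale LC_NUMERIC;
setlocale LC_NUMERIC, "de_DE.UTF-8";    # example locale name
printf "%.2f\n", 3.14;                  # locale-dependent output

# ...and restore it, so number stringification (and therefore JSON
# encoding) works correctly again.
setlocale LC_NUMERIC, $old;
```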
BUGS
While the goal of this module is to be correct, that unfortunately does
not mean it's bug-free, only that I think its design is bug-free. If you
keep reporting bugs, they will be fixed swiftly, though.

Please refrain from using rt.cpan.org or any other bug reporting
service. I put the contact address into my modules for a reason.

SEE ALSO
The json_xs command line utility for quick experiments.

AUTHOR
Marc Lehmann <schmorp@schmorp.de>
http://home.schmorp.de/
