Python’s zlib decompresses data, but Pako (JavaScript zlib) fails

I’m trying to inflate some zlib-compressed data (from the Ren’Py Archive 3 file format, for those wondering) with JavaScript, but I can’t seem to reproduce the Python behavior in Node.js.

This Python script works:

import zlib

# Data written to a file from a different Python script, for demo purposes
# This would be a value in memory in JS
with open("py", "rb") as data:
    # Works
    print(
        zlib.decompress(data.read(), 0)
    )

While this Node.js script:

const fs = require('fs');
const pako = require('pako');

const data = fs.readFileSync('py', 'binary');

// Doesn't work
console.log(
    pako.inflateRaw(data)
);

Throws this error:

C:\Users\gunne\Documents\Programming\node.js\rpa-extractor\node_modules\pako\lib\inflate.js:384
  if (inflator.err) throw inflator.msg || msg[inflator.err];
                    ^
invalid stored block lengths
(Use `node --trace-uncaught ...` to show where the exception was thrown)

As per the Python zlib.decompress documentation, a wbits value of 0 (the second argument) “automatically [determines] the window size from the zlib header,” something that Pako seemingly doesn’t do.
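In case it helps narrow down my mistake: from reading Pako’s README, my understanding (an assumption on my part) is that Pako doesn’t expose a wbits-style auto-detection argument on its convenience functions; instead, the wrapper format is picked by which function you call (inflate for data with a zlib header, inflateRaw for raw, headerless deflate). My best guess at an equivalent of the Python call, also reading the file as a Buffer instead of a 'binary' string, would be something like this, though I’m not sure it matches Python’s wbits=0 behavior:

const fs = require('fs');
const pako = require('pako');

// No encoding argument: readFileSync returns a Buffer (a Uint8Array),
// which pako should accept directly
const data = fs.readFileSync('py');

// Guess: inflate() handles zlib-wrapped data (header + checksum),
// while inflateRaw() is for headerless/raw deflate streams
console.log(
    pako.inflate(data)
);

If that’s on the right track, I’m still unclear whether Pako has an equivalent of wbits=0’s header-based auto-detection, or whether I need to know the wrapper format in advance.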

Am I doing something incorrectly? How would I achieve the same output as in Python using Node.js?