Difference between Python's cv2.imread() and TensorFlow.js's tf.node.decodeImage()

Comparing the printed pixel values for the same image between the two functions, I get different results.
For example, reading a 100×100 JPEG image in Python like this:

import cv2

path = "./test.jpeg"
training_data = []
pic = cv2.imread(path)                      # OpenCV decodes to BGR order
pic = cv2.cvtColor(pic, cv2.COLOR_BGR2RGB)  # convert to RGB
training_data.append(pic)
print(training_data[0][0])                  # first row of pixels

For the first three pixels of the first row, the print shows:

[[180  71  51]
 [233 145 125]
 [206 155 136]

Then, decoding the same image in Node.js with tf.node like this:

const fs = require("fs");
const tf = require("@tensorflow/tfjs-node");

const file_path = "./test.jpeg";
const img = fs.readFileSync(file_path);      // raw JPEG bytes
const tensor = tf.node.decodeImage(img, 3);  // decode to an RGB tensor
tensor.print();                              // print() logs the tensor itself and returns nothing

For the same three pixels, the console shows:

 [[[177, 70 , 50 ],
   [231, 144, 124],
   [201, 153, 133],
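
To make the comparison exact, I sliced out just those first three pixels from the tensor (a minimal sketch; slice() takes a begin index and a size per dimension):

// Extract a 1×3 patch of RGB pixels from the top-left corner of the image
const patch = tensor.slice([0, 0, 0], [1, 3, 3]);
patch.print();  // prints exactly the three pixels quoted above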

Although the values are very similar, they are clearly different. Inspecting other positions, the values do sometimes match. So my question is: what is the difference between these two functions that makes them produce different results for the same image? And what result should I expect from tf.browser.fromPixels() — the Python-like values or the tf.node ones?
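
For reference, this is roughly how I would check tf.browser.fromPixels() on the browser side (a sketch; it assumes an <img> element with id "test" pointing at the same JPEG, and that the image has finished loading):

// Browser-side sketch: decode via the browser's own image pipeline
const imgEl = document.getElementById("test");
const pixels = tf.browser.fromPixels(imgEl, 3);  // 3 = RGB channels
pixels.slice([0, 0, 0], [1, 3, 3]).print();      // compare against the values above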