I am trying to build a Chrome extension using the transformers.js library, to test its limits and see whether on-device ML can be a viable choice in some cases. I am adding new features on top of the example provided in the official repo. When I added the translation pipeline, I noticed something strange: the model is downloaded again after every close and reopen of the browser.
```js
import { pipeline } from "@xenova/transformers";

class MyTranslationPipeline {
  static task = "translation";
  static model = "Xenova/nllb-200-distilled-600M";
  static instance = null;

  static async getInstance(progress_callback = null) {
    if (this.instance === null) {
      console.log("Loading translation pipeline...");
      // Store the promise itself, so concurrent callers share a single load.
      this.instance = pipeline(this.task, this.model, { progress_callback });
    }
    return this.instance;
  }
}
```
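To illustrate what the singleton actually guarantees, here is a minimal sketch with the model load replaced by a hypothetical stub (`fakePipeline` and the counter are mine, not from transformers.js): the class only deduplicates loads within one session, because `instance` lives in memory and is reset whenever the extension's context restarts.

```javascript
// Stubbed version of the singleton pattern above.
// fakePipeline stands in for pipeline(); loadCount counts "downloads".
let loadCount = 0;
function fakePipeline(task, model) {
  loadCount += 1; // a real call would fetch/compile the model here
  return Promise.resolve({ task, model });
}

class StubTranslationPipeline {
  static task = "translation";
  static model = "Xenova/nllb-200-distilled-600M";
  static instance = null;

  static async getInstance() {
    if (this.instance === null) {
      // Cache the promise; later callers reuse it instead of reloading.
      this.instance = fakePipeline(this.task, this.model);
    }
    return this.instance;
  }
}

(async () => {
  await StubTranslationPipeline.getInstance();
  await StubTranslationPipeline.getInstance();
  console.log(loadCount); // logs 1: one load per session, however many callers
})();
```

So within a running session the model is loaded once, but after a browser restart `instance` is `null` again and persistence has to come from the download cache, not from this class.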
The original example showcased `env.allowLocalModels = false;`. I set it back to `true`, but that didn't help either. What I expect is that when I reopen the browser, the model is loaded from the cache where it already is.
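For reference, this is roughly the configuration I have been experimenting with, assuming the `env` flags behave as documented (`useBrowserCache` is the setting I understand controls caching of downloads via the browser's Cache API, and it should default to `true`):

```javascript
import { env } from "@xenova/transformers";

// Fetch models from the Hugging Face Hub instead of a local path.
env.allowLocalModels = false;
// Cache downloaded model files in the browser's Cache API
// (this is supposed to be the default).
env.useBrowserCache = true;
```

My question is whether something about the extension environment prevents this cache from surviving a browser restart, or whether I am misconfiguring it.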