I’m conducting experiments with LangchainJS, Pinecone, and OpenAI.
My goal is to use the model (for now GPT-3, as I still don’t have GPT-4 API access) to assist me in my coding work by giving it new data about specific frameworks.
I have already indexed the craft.js documentation and codebase, and using either a ConversationalRetrievalQAChain or an Agent with a tool created for this specific index, the model is quite capable of answering my questions.
The issue I’m facing is that the model seems to lose its base capabilities. For example, it is never able to generate code based on that information, or worse, if I ask it to generate code for another framework (e.g., Next.js), it either tells me it doesn’t know or gives me a nonsensical response like “Even though we can create Next.js components using Craft.js, it is not the best tool to achieve this task.”
But if I try this prompt:
Generate a Next.js component that allows file uploads.
in the playground or with model.call(prompt), it generates nearly perfect code.
It seems that when I feed the retrieved embeddings to the model, it becomes overly focused on this new data and loses some of its general power.
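As I understand it (and this is my paraphrase of the kind of default QA prompt a retrieval chain stuffs the documents into; the exact wording in LangchainJS may differ), something like the following happens under the hood, which would explain why the model refuses questions outside the indexed docs:

```typescript
// Hypothetical sketch of what a retrieval chain does before calling the model:
// it pastes the retrieved documents into a template that tells the model to
// answer *from the context only*. With such a prompt, the craft.js docs crowd
// out the model's general coding ability.
function buildRetrievalPrompt(contextDocs: string[], question: string): string {
  return [
    "Use the following pieces of context to answer the question at the end.",
    "If you don't know the answer, just say that you don't know.",
    "",
    ...contextDocs,
    "",
    `Question: ${question}`,
    "Helpful Answer:",
  ].join("\n");
}

// Example: the retrieved context is about craft.js, but the question is not.
const prompt = buildRetrievalPrompt(
  ["Craft.js exposes a <Frame> component that renders the editable canvas."],
  "Generate a Next.js component that allows file uploads."
);
console.log(prompt);
```

If that is roughly what the chain sends, the “I don’t know” answers are the model obeying the template, not a real loss of capability.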
Is there a way to maintain the power of the base model and add new knowledge to it? If so, what is the best approach to achieve this?