I’ve been using the text-davinci-003 model and having no issues generating responses and getting the full ChatGPT experience. The problem is that once I swap the davinci model out for the new gpt-3.5-turbo, I immediately get a 400 error. I know there could be lots of reasons for that, but based on the OpenAI docs it seemed like the models should be pretty interchangeable just by swapping out the model name.
Is there more I need to adjust besides the model string? Did I completely miss another aspect of the build? Thank you for the advice!
Working JS (which generates responses) with text-davinci-003:
app.post("/chatgpt", (req, res) => __awaiter(void 0, void 0, void 0, function* () {
    console.log(req.body.query);
    try {
        const response = yield openapi.createCompletion({
            // Where you can change models for different responses
            // model: "text-davinci-003" date: 3.13.23
            model: "text-davinci-003",
            prompt: req.body.query,
            // max_tokens: 64 date: 3.13.23
            max_tokens: 128,
            // set temperature to 0 for no variability - 0.8 max for randomness
            temperature: 0.8,
        });
        res.json({
            success: true,
            data: response.data.choices[0].text,
        });
    }
    catch (error) {
        // Surface the API error to the client instead of hanging the request
        res.status(500).json({ success: false, error: error.message });
    }
}));
Broken “400 error” JS with gpt-3.5-turbo:
app.post("/chatgpt", (req, res) => __awaiter(void 0, void 0, void 0, function* () {
    console.log(req.body.query);
    try {
        const response = yield openapi.createCompletion({
            // Where you can change models for different responses
            // model: "text-davinci-003" date: 3.13.23
            model: "gpt-3.5-turbo",
            prompt: req.body.query,
            // max_tokens: 64 date: 3.13.23
            max_tokens: 128,
            // set temperature to 0 for no variability - 0.8 max for randomness
            temperature: 0.8,
        });
        res.json({
            success: true,
            data: response.data.choices[0].text,
        });
    }
    catch (error) {
        // Surface the API error to the client instead of hanging the request
        res.status(500).json({ success: false, error: error.message });
    }
}));
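For what it’s worth, my reading of the API reference is that gpt-3.5-turbo is served by the chat completions endpoint (`createChatCompletion` in the Node library), which expects a `messages` array rather than a `prompt` string, and returns the text at `choices[0].message.content` instead of `choices[0].text` — which would explain the 400 from `createCompletion`. A minimal sketch of the request shape I think it wants (`buildChatRequest` is just a hypothetical helper I’m using to illustrate the payload):

```javascript
// Hypothetical helper: maps my existing request body onto the shape the
// chat completions endpoint expects for gpt-3.5-turbo.
function buildChatRequest(query) {
    return {
        model: "gpt-3.5-turbo",
        // Chat models take a list of role/content messages, not a prompt string
        messages: [{ role: "user", content: query }],
        max_tokens: 128,
        temperature: 0.8,
    };
}

// In the route handler, the call would then presumably become:
//   const response = yield openapi.createChatCompletion(buildChatRequest(req.body.query));
//   res.json({ success: true, data: response.data.choices[0].message.content });
```

If that’s right, the rest of the Express route should be able to stay the same — only the endpoint, the payload shape, and the path to the returned text change.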