My LINE chatbot using the LlamaIndex TS agent can't remember the previous question

I created a LINE chatbot that uses the LlamaIndex TypeScript library, and it answers questions very well. I want an agent that can remember previous questions and respond with that context, and this works when the agent is used in a loop as in the code below.

import fs from "node:fs/promises";
import { Document, VectorStoreIndex, QueryEngineTool, OpenAIAgent } from "llamaindex";
import readlineSync from "readline-sync";

const path = "data/car.json";

async function main() {
  const essay = await fs.readFile(path, "utf-8");
  const document = new Document({ text: essay, id_: path });
  const index = await VectorStoreIndex.fromDocuments([document]);

  const individual_query_engine_tools = [
    new QueryEngineTool({
      queryEngine: index.asQueryEngine(),
      metadata: {
        name: "vector_index",
        description: "Useful when you want to answer questions about information about cars assembled in Thailand.",
      },
    }),
  ];

  const agent = new OpenAIAgent({
    tools: [...individual_query_engine_tools],
    verbose: true
  });

  while (true) {
    const userInput = readlineSync.question("Enter your question (type 'exit' to quit): ");
    if (userInput.toLowerCase() === 'exit') {
      console.log("Exiting...");
      break;
    }
    else if(userInput.toLowerCase() !== '') {
        const response = await agent.chat({
            message: userInput,
          });
        console.log(response.message.content);
      }
  }
}

main().catch(console.error);

Output:


Enter your question (type 'exit' to quit): toyota yaris price?
The price of the Toyota Yaris ranges from 559,000 to 709,000 baht.

Enter your question (type 'exit' to quit): color?
The Toyota Yaris is available in 6 colors: white, black, red, blue, gray, and yellow.

Enter your question (type 'exit' to quit): sound system?
The Toyota Yaris is equipped with a JBL sound system featuring 8 speakers, providing a high-quality audio experience.

But when I used the same approach in my LINE chatbot project, it didn't work as expected. The code is below:

const line = require('@line/bot-sdk');
const express = require('express');
const dotenv = require('dotenv');
const fs = require("fs").promises;
const { Document, VectorStoreIndex, QueryEngineTool, OpenAIAgent } = require("llamaindex");

dotenv.config();

const lineConfig = {
    channelAccessToken: process.env.ACESS_TOKEN,
    channelSecret: process.env.SECRET_TOKEN
};

const client = new line.Client(lineConfig);
const app = express();

app.post('/webhook', line.middleware(lineConfig), async(req, res) => {
    try {
        const events = req.body.events;
        console.log('events =>>>>', events);

        if (events && events.length > 0) {
            await Promise.all(events.map(async(event) => {
                await handleEvent(event);
            }));
        }

        res.status(200).send('OK');
    } catch (error) {
        console.error('Error processing events:', error);
        res.status(500).end();
    }
});

const handleEvent = async(event) => {
    if (event.type !== 'message' || event.message.type !== 'text') {
        return null;
    }

    const userQuery = event.message.text;

    // Load the data and build the index, tools, and agent on every event
    const path = "data/car.json";
    const essay = await fs.readFile(path, "utf-8");
    const document = new Document({ text: essay, id_: path });
    const index = await VectorStoreIndex.fromDocuments([document]);

    const individual_query_engine_tools = [
        new QueryEngineTool({
            queryEngine: index.asQueryEngine(),
            metadata: {
                name: "vector_index",
                description: "Useful when you want to answer questions about information about cars assembled in Thailand.",
            },
        }),
    ];

    const agent = new OpenAIAgent({
        tools: [...individual_query_engine_tools],
        verbose: true
    });

    try {
        const { response } = await agent.chat({ message: userQuery });

        await client.pushMessage(event.source.userId, {
            type: 'text',
            text: response
        });
    } catch (error) {
        console.error("Error handling query:", error);
        return client.pushMessage(event.source.userId, {
            type: 'text',
            text: 'Sorry, there was an error processing your request.'
        });
    }
};

app.listen(4000, () => {
    console.log('listening on 4000');
});

In the LINE chatbot, the first question is answered correctly, but when I ask a follow-up question that requires the context of the previous question, the bot doesn't remember it and always treats it as the first question.

The answers in the LINE chatbot look like this:

(screenshot of the LINE conversation)

My personal opinion is that this happens because the code rebuilds everything every time an event occurs, starting from this line (I don't know if I'm right):

const handleEvent = async(event) => {

I am a fourth-year university student in Thailand, and I'm studying how to build a chatbot with LlamaIndex. So if I have made a mistake in any part of the code, I would be grateful if anyone reading this post could teach me. Thank you for reading, and I hope I can solve this problem as quickly as possible. Thank you!
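From what I understand so far, one possible fix might be to stop rebuilding the agent inside `handleEvent` and instead keep one agent per LINE user so its chat history survives across webhook events. Below is a minimal sketch of that idea; `agentCache`, `getOrCreateAgent`, and `createAgent` are my own hypothetical names (the factory would stand in for the `OpenAIAgent` setup above, which could be built once at startup). I'm not sure this is the right approach:

```javascript
// Hypothetical sketch: cache one agent per LINE userId so the agent's
// conversation history persists across webhook events, instead of being
// recreated (and therefore reset) inside handleEvent on every message.
const agentCache = new Map();

// createAgent is a factory, e.g. () => new OpenAIAgent({ tools, verbose: true }).
// It is only called the first time a given userId is seen.
function getOrCreateAgent(userId, createAgent) {
    if (!agentCache.has(userId)) {
        agentCache.set(userId, createAgent());
    }
    return agentCache.get(userId);
}
```

Then inside `handleEvent` I would do something like `const agent = getOrCreateAgent(event.source.userId, () => new OpenAIAgent({ tools: individual_query_engine_tools, verbose: true }));`, with the index and tools built once when the server starts. Would this be the right direction?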