What frontend implementation/package does TypingMind use for client-side AI chat streaming?

I’m trying to build a client-side LLM interface similar to TypingMind, where users can input their own API keys and interact directly with AI models without a backend server.

I’ve explored the Vercel AI SDK, but it seems to require a backend server for proxying requests. TypingMind appears to work entirely client-side while maintaining features like:

  • Streaming responses
  • Direct API calls using user-provided keys
  • Multiple model support
  • Conversation management

Does anyone know what implementation or package TypingMind uses to achieve this? Or alternatively, what would be the recommended approach to build a similar client-side only architecture?

I’m particularly interested in:

  1. How they handle streaming responses directly from OpenAI
  2. The library/method they use for managing API calls
  3. Any potential open-source alternatives that support client-side only implementation

Note: I understand TypingMind’s exact implementation might not be public, but I’m looking for technical approaches that could achieve similar functionality.
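
Regarding point 1, here is roughly the kind of direct, browser-side streaming call I have in mind: the standard OpenAI Chat Completions endpoint with stream: true and manual SSE parsing. This is only a sketch of the general approach (the model name is just an example), not anything TypingMind-specific:

async function streamChat(apiKey, messages, onToken) {
  // Call OpenAI directly from the browser with the user-supplied key
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({ model: "gpt-4o-mini", messages, stream: true }),
  });

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    // The body is a server-sent-event stream: "data: {...}" lines, ending with "data: [DONE]"
    const lines = buffer.split("\n");
    buffer = lines.pop(); // keep any partial line for the next chunk
    for (const line of lines) {
      const trimmed = line.trim();
      if (!trimmed.startsWith("data:")) continue;
      const payload = trimmed.slice(5).trim();
      if (payload === "[DONE]") return;
      const token = JSON.parse(payload).choices[0]?.delta?.content;
      if (token) onToken(token);
    }
  }
}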

How to use an AI chat library directly on the client side without a server proxy?

I’m trying to implement a chat interface using an AI chat library that currently requires a backend server to proxy all LLM (Large Language Model) calls. However, I need to make these calls directly from the client side without a backend server.

The current implementation uses the useChat hook in React, which assumes making a fetch call to the server with a specific response type. I’ve tried passing a remote URL for the model provider’s API directly into the useChat config, but this doesn’t work completely because the decoding step (which typically happens on the server) is missing.

// Attempted solution (not working)
useChat({
  api: 'https://direct-model-provider-url.com/api'
})

Use cases for this include:

  • Client-side apps where users manage their own API keys locally
  • Applications working with local AI models
  • Integration with local instances (like Ollama)

While I understand the security implications of exposing API keys on the client side, there are valid use cases where server-side proxying isn’t necessary or desired.

Is there a way to use this library entirely on the client side while maintaining the same abstractions around models? If not, what alternatives would you recommend?
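
For context, "entirely on the client side" would mean something like the following, sketched with the SDK's core streamText/createOpenAI functions and a user-supplied key. I'm assuming these can run in the browser since they are fetch-based; the model name is just an example, and userProvidedKey/appendToChatUI are illustrative placeholders:

import { streamText } from "ai";
import { createOpenAI } from "@ai-sdk/openai";

// Provider configured with a key the user typed into the UI (illustrative placeholder)
const openai = createOpenAI({ apiKey: userProvidedKey });

const result = await streamText({
  model: openai("gpt-4o-mini"), // example model
  messages: [{ role: "user", content: "Hello!" }],
});

// Consume the token stream and append it to the chat UI (hypothetical helper)
for await (const chunk of result.textStream) {
  appendToChatUI(chunk);
}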

Current workaround mentioned:

  • Using a custom fetch function to replace the fetch behavior
  • Still missing the decoding step functionality

Any suggestions on how to implement this properly?

https://github.com/vercel/ai/issues/208

Can this type of chart be drawn with chartjs, and what would a chart of this type be called?

I want to create a visual representation of items in a store being out of stock over a period of time.

I want to draw a chart, with dates at the bottom along the X axis and items being listed along the Y axis.

I’ve attached an image with a crude example of what I have in mind (slapped together quickly in google sheets).

The questions I have regarding this are as stated in the title of the post: What would this type of chart be called? I’m asking this to help me find a JavaScript library that will help me achieve this.

Secondly, for those knowledgeable about chart.js – will this library be able to draw something like this without a ton of customization being required?

Lastly, what other libraries would be able to draw something like this?

Can’t connect to MySQL database via JS… Error: getaddrinfo ENOTFOUND

I’m moving from localhost to an online database hosted by IONOS and I’m having trouble connecting to it via JS, but when I use PHP with the same database details I can connect and query perfectly fine.

Why am I getting this error when connecting to the DB using JS code?

I do not want to rewrite my files in PHP, as I am very unfamiliar with it and I already have tons of files written in JS that ran fine against localhost.

JS Code:

import mysql from 'mysql';
import env from "dotenv";

env.config();

// Create a connection 
const connection = mysql.createConnection({
    host: process.env.DB_HOST,
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_NAME,
    port: process.env.DB_PORT
});

// Test the connection
connection.connect()

connection.query('SELECT * FROM unit', (err, rows) => {
  if (err) throw err;
  console.log('Data received from Db:',rows[0].soluition);
});

Error:

file:///C:/Users/nunzi/Desktop/VFM/vfm-backend/src/db.js:19
if (err) throw err;
^

Error: getaddrinfo ENOTFOUND XXXXXXXXXXXXXXX
    at GetAddrInfoReqWrap.onlookupall [as oncomplete] (node:dns:120:26)
    --------------------
    at Protocol._enqueue (C:\Users\nunzi\Desktop\VFM\vfm-backend\node_modules\mysql\lib\protocol\Protocol.js:144:48)
    at Protocol.handshake (C:\Users\nunzi\Desktop\VFM\vfm-backend\node_modules\mysql\lib\protocol\Protocol.js:51:23)
    at Connection.connect (C:\Users\nunzi\Desktop\VFM\vfm-backend\node_modules\mysql\lib\Connection.js:116:18)
    at file:///C:/Users/nunzi/Desktop/VFM/vfm-backend/src/db.js:16:12
    at ModuleJob.run (node:internal/modules/esm/module_job:271:25)
    at async onImport.tracePromise.__proto__ (node:internal/modules/esm/loader:547:26)
    at async asyncRunEntryPointWithESMLoader (node:internal/modules/run_main:116:5) {
  errno: -3008,
  code: 'ENOTFOUND',
  syscall: 'getaddrinfo',
  hostname: 'XXXXXXXXXXXXXX',
  fatal: true
}

Node.js v22.12.0

Note: I changed hostname to X’s just for example.
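
For reference, getaddrinfo ENOTFOUND means the DNS lookup of the host string failed before any MySQL handshake happened, so the same failure can be reproduced without the database involved at all. A minimal check, using the same .env variables as above:

import dns from "node:dns";
import env from "dotenv";

env.config();

// Print the exact value being resolved; JSON.stringify reveals stray quotes or spaces
console.log("DB_HOST is:", JSON.stringify(process.env.DB_HOST));

// Attempt the same lookup the mysql driver performs internally
dns.lookup(process.env.DB_HOST, (err, address) => {
  if (err) {
    // ENOTFOUND here means the host string itself cannot be resolved
    console.error("DNS lookup failed:", err.code);
  } else {
    console.log("Resolved to:", address);
  }
});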

Why is error-handling middleware the preferred option in an Express app?

In an Express application I am using the MVC pattern. In my controller I call a method that is the entry point to my model (in this case a small service class which receives a DAO), and which also includes its own try/catch block to account for either a database connection error or an error where no user is found in the database. If any validation errors occur before this, in a previously run middleware, then I deal with the error within the controller. The controller itself is a custom middleware function that handles the business logic when an admin user wants to sign in.

exports.login = async (req, res, next) => {
  try {
    const errors = validationResult(req);
    if (!errors.isEmpty()) {
      const error = new ValidationError(422,'validation error', errors);
      throw error;
    }
    const loginService = new LoginService(AdminDAO, req.body.username, req.body.password);
    const userId = await loginService.authenticateUser();
    req.session.adminUserId = userId; // this is only run if an error is not thrown either in my service class or DAO when calling this method 
    res.redirect("/admin");
  } catch (e) {
    return next(e);
  }
};

The error-handling middleware checks the type of error and, based on it, sends a response back to the client. I do this either by running an if/else statement to check the error type, or by storing error objects in an array and extracting the one whose type matches (via instanceof on the Error class), adhering to the open/closed principle with a strategy-type pattern and a polymorphic method to return a response.
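
For reference, the centralized error-handling middleware I'm describing looks roughly like this (a simplified sketch; DatabaseError and the ./errors module path are hypothetical, ValidationError is the class used above):

const express = require("express");
const { ValidationError, DatabaseError } = require("./errors"); // hypothetical module path

const app = express();

// ...routes and controllers registered above...

// Express treats any middleware with four arguments as an error handler
app.use((err, req, res, next) => {
  if (err instanceof ValidationError) {
    return res.status(err.statusCode || 422).json({ message: err.message, errors: err.errors });
  }
  if (err instanceof DatabaseError) {
    return res.status(500).json({ message: "Database unavailable" });
  }
  // Fallback for anything not explicitly handled
  return res.status(500).json({ message: "Internal server error" });
});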

However, I fail to see the advantage of this. Why don’t I just create a polymorphic method named something like handleError(res) on every Error class, which handles the error and sends the response, and then call it within my catch block, instead of passing the object to next() for the middleware to then do the exact thing I could have done in the Error class before calling next? Like this:

exports.login = async (req, res, next) => {
  try {
    const errors = validationResult(req);
    if (!errors.isEmpty()) {
      const error = new ValidationError(422,'validation error',  errors);
      throw error;
    }
    const loginService = new LoginService(AdminDAO, req.body.username, req.body.password);
    const userId = await loginService.authenticateUser();
    req.session.adminUserId = userId; // this is only run if an error is not thrown either in my service class or DAO when calling this method 
    res.redirect("/admin");
  } catch (e) {
    return e.handleError(res); // this method sends the error response to the client
  }
};

Unable to get the loading state while submitting the form using formik

I have a form component that includes an input field of type number and a button. I am using Formik to manage the form’s state. The form component contains a modal that opens when I click the submit button.

The issue I am facing is that there is another button inside the modal, which is type submit. Clicking this button submits the form, but I am unable to access the isSubmitting state from Formik.

import React, { useState } from "react";
import { Formik, Form, Field } from "formik";

const FormWithModal = () => {
  const [isModalOpen, setModalOpen] = useState(false);

  return (
    <div>
      <h1>Form with Modal</h1>
      <Formik
        initialValues={{ numberInput: "" }}
        onSubmit={async (values, { setSubmitting }) => {
          setSubmitting(true);
          // await API call...
          setSubmitting(false);
        }}
      >
        {({ isSubmitting, isValid }) => (
          <Form>
            <div>
              <label htmlFor="numberInput">Number:</label>
              <Field
                id="numberInput"
                name="numberInput"
                type="number"
                placeholder="Enter a number"
              />
            </div>
            <button onClick={() => setModalOpen((prev) => !prev)}>
              Submit
            </button>

            {isModalOpen && (
              <div className="modal">
                <div className="modal-content">
                  <h2>Modal</h2>
                  <p>The form has been submitted!</p>
               
         
                  <button
                    type="submit"
                    disabled={isSubmitting || !isValid}
                    loading={isSubmitting}
                  >
                    Modal Submit
                  </button>
                </div>
              </div>
            )}
          </Form>
        )}
      </Formik>
    </div>
  );
};

export default FormWithModal;


Remove ‘Ship to a different address?’ checkbox while keeping WooCommerce’s checkout shipping fields enabled

I want to remove the “Ship to a different address?” checkbox from the WooCommerce checkout page, but I am encountering an issue with the checkout calculations.

Here’s the scenario:

  • I want to keep the “Ship to a different address?” functionality checked by default (always using the shipping address for billing).
  • When I remove the checkbox using a WooCommerce hook like remove_action, the JavaScript logic in checkout.min.js treats the checkbox as unchecked.
  • As a result, WooCommerce uses the billing address for calculations instead of the shipping address.

Currently, I’ve modified the checkout.min.js file to always consider shipping fields in the update_checkout_action function:

update_checkout_action: function (o) {
    var r = e("#shipping_country").val(),
        c = e("#shipping_state").val(),
        i = e("#shipping_postcode").val(),
        n = e("#shipping_city").val(),
        a = e("#shipping_address_1").val(),
        u = e("#shipping_address_2").val(),
        d = r,
        s = c,
        m = i,
        l = n,
        p = a,
        h = u;

    var g = {
        s_country: d,
        s_state: s,
        s_postcode: m,
        s_city: l,
        s_address: p,
        s_address_2: h,
        // other data...
    };
    // ajax call...
}

This modification works as expected. However, I want to avoid modifying the core checkout.min.js file.

What I’m looking for:

  • A way to hook into WooCommerce to remove the checkbox (#ship-to-different-address) from the DOM.
  • A guarantee that WooCommerce calculations still use the shipping address, even if the checkbox is removed.
  • A way to avoid modifying the core WooCommerce files.

Is there a way to achieve this using hooks, custom JavaScript, or another clean method?
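
For example, the kind of "custom JavaScript" approach I have in mind would be to leave the checkbox in the DOM but force it checked and hide it, so checkout.min.js still reads the shipping fields. This is only a sketch; #ship-to-different-address-checkbox is my assumption about the standard WooCommerce markup:

// Force the "ship to a different address" checkbox on and hide it,
// so WooCommerce's own checkout JS keeps using the shipping fields.
jQuery(function ($) {
  var $checkbox = $("#ship-to-different-address-checkbox"); // assumed WooCommerce checkbox ID
  $checkbox.prop("checked", true).trigger("change");        // checkout.min.js now treats it as checked
  $("#ship-to-different-address").hide();                   // hide the checkbox row itself
  $(document.body).trigger("update_checkout");              // recalculate using the shipping address
});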


BeforeUnload – Safari (vanilla.js)

I’m making a test-taking implementation for my classroom (I’m a high school teacher), and I’m trying to distinguish whether someone leaves the page because they are using a different tab (visibilitychange), a different app (blur), or navigating to a different page within this tab. It works perfectly fine in Chrome, but inconsistently in Safari (macOS).

https://sandbox-transparent-weaver.glitch.me/index.html

I’ve got

    window.addEventListener("blur", function () {
      console.log("Blurred");
    });

which works exactly as expected when I blur the page but it’s still visible

    window.addEventListener("visibilitychange", function () {
      console.log("Change Vis");
    });

which works exactly as expected when I use a different tab

My problem is this:

    window.addEventListener("beforeunload", function () {
      event.preventDefault();
      console.log("Before Unload");
    });

Which works but only in limited cases. It works effectively for refresh, close-tab, and form-submit, but it does not work at all for clicking a link on the page, selecting a bookmark, or entering a new URL in the navbar.
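
Would pairing beforeunload with a pagehide listener be the right direction for Safari? A sketch of what I mean:

    window.addEventListener("pagehide", function (event) {
      // event.persisted is true when the page is going into the back/forward cache
      console.log("Page hide", event.persisted ? "(persisted)" : "");
    });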

Help, please?!

How to handle page scrolling with upward infinite scrolling with Javascript

I have an application with a page that has a chat feature. I want this to look and feel like most chat apps, the newest messages are at the bottom of the page, and you scroll up to see more previous messages.

I’ve got the pagination aspect of the infinite scroll working fine. When you scroll to the top of the current page, you get the next set of previous messages.

Unfortunately, when you get to the top of the page, the next page loads in, but the scroll position stays at the top of the page, essentially “skipping” you to the top of the next page of messages. It’s a bad UX.

At the moment, I’m adding “markers” for each page: just divs with ids for the page number. When the next set of messages loads in, I scroll to the latest page marker, which keeps the UX better.

This just feels hacky, and I feel like I’m missing something more simple to keep the scroll from behaving this way. Any thoughts would be appreciated!
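
For what it's worth, the kind of adjustment I suspect I should be doing instead of markers is to record the scroll height before prepending and restore the delta afterwards. Roughly (the selector and prependNextPage are illustrative placeholders):

// Keep the viewport anchored on the same message when older messages are prepended
const container = document.querySelector(".chat-messages"); // the scrollable chat element

async function loadOlderMessages() {
  const previousScrollHeight = container.scrollHeight;
  const previousScrollTop = container.scrollTop;

  await prependNextPage(); // however the next page of messages gets rendered

  // Restore the offset so the user stays on the message they were reading
  container.scrollTop = previousScrollTop + (container.scrollHeight - previousScrollHeight);
}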

Issue retrieving Token with application/x-www-form-urlencoded using Bruno Script

I’m using Bruno and encountering an issue with a script to fetch a token from an endpoint.

When the Content-Type is application/json, everything works as expected, and I can retrieve and set the token successfully. Here’s the working script for application/json:

const axios = require("axios");
const https = require("https");
const httpsAgent = new https.Agent({
    rejectUnauthorized: false
});
var path = "https://private-api-dev.pre/internal/jwt/generate";
var headers = {
    "Content-Type": "application/json",
    "Cache-Control": "no-cache",
    "client-id": "123-456-789",
    "client-secret": "123456789"
};
var body = JSON.stringify({
    subject: "est",
    audience: "customers",
    miscInformation: "testing"
});
const auth = await axios.post(path, body, {
    headers,
    httpsAgent
})
    .then((response) => {
        bru.setEnvVar("jwt_token_dev", response.data.jwt);
        console.log("Fetched JWT Token:", response.data.jwt);
    })
    .catch((error) => {
        console.error("Error fetching token:", error.message);
    });

However, when the endpoint requires the Content-Type to be application/x-www-form-urlencoded, the script doesn’t work, and the token isn’t updated. I’ve adjusted the body and headers like this:

const qs = require('qs');
var headers = {
    "Content-Type": "application/x-www-form-urlencoded",
    "Cache-Control": "no-cache",
    "client-id": "123-456-789",
    "client-secret": "123456789"
};
var body = qs.stringify({
    subject: "est",
    audience: "customers",
    miscInformation: "testing"
});
const auth = await axios.post(path, body, {
    headers,
    httpsAgent
})
    .then((response) => {
        bru.setEnvVar("jwt_token_dev", response.data.jwt);
        console.log("Fetched JWT Token:", response.data.jwt);
    })
    .catch((error) => {
        console.error("Error fetching token:", error.message);
    });

Despite the changes, the jwt_token_dev variable isn’t updated. I’ve confirmed that when calling the endpoint directly, everything works fine, and I receive the token.
I also tried:

const body = new URLSearchParams({
    subject: "est",
    audience: "customers",
    miscInformation: "testing",
}).toString();

Also fetch:

fetch(path, {
    method: "POST",
    headers: {
        "Content-Type": "application/x-www-form-urlencoded",
        "x-ibm-client-id": "client-id",
        "x-ibm-client-secret": "client-secret",
    },
    body: new URLSearchParams({
        subject: "est",
        audience: "customers",
        miscInformation: "testing",
    }),
    agent: httpsAgent,
})

Is there something I’m missing when using application/x-www-form-urlencoded in this script?

Chrome Extension: declarativeNetRequest regex does not work

I have created a minimal example of a redirect behavior and I am unable to make it work no matter what I do.

Starting with simple block rule, a simple regex for bbc.com works fine:

[
    {
        "id": 1,
        "priority": 1,
        "action": {
            "type": "block",
        },
        "condition": {
            "regexFilter": "^https://www\.bbc\.com(/.*)?$",
            "resourceTypes": [
                "main_frame"
            ]
        }
    }
]

If I change the same rule to a redirect rule, it stops working. It doesn’t even show as a “matched ruleset” if I try to log using declarativeNetRequestFeedback

[
    {
        "id": 1,
        "priority": 1,
        "action": {
            "type": "redirect",
            "redirect": {
                "url": "https://www.amazon.de"
            }
        },
        "condition": {
            "regexFilter": "^https://www\.bbc\.com(/.*)?$",
            "resourceTypes": [
                "main_frame"
            ]
        }
    }
]

My ultimate goal is to use a simple regex substitution, but of course that also doesn’t work. Something like:

[
    {
        "id": 1,
        "priority": 1,
        "action": {
            "type": "redirect",
            "redirect": {
                "regexSubstitution": "https://www.amazon.de\1"
            }
        },
        "condition": {
            "regexFilter": "^https://www\.bbc\.com(/.*)?$",
            "resourceTypes": [
                "main_frame"
            ]
        }
    }
]

Note: these are just examples to create a minimal example of what’s not working. The end goal for my actual use case is to use regexSubstitution to manipulate and redirect to a different url.

Vue.js xterm.js FitAddon Called Too Early, Terminal Doesn’t Resize To The Correct Dimensions

I’m working on a Vue.js application where I’m integrating xterm.js to display terminal logs. I’m using the FitAddon to ensure the terminal resizes correctly within its container. However, I’m encountering an issue where the fitAddon.fit() method is called too early. As a result, the terminal dimensions are initially set to zero, and the terminal doesn’t fit correctly until I manually resize the browser window, which then triggers the fitAddon to work as expected.

Here’s what’s happening:

  1. Initialization:

    The terminal initializes with fitAddon.fit() during the onMounted lifecycle hook.
    At this point, the terminalContainer has offsetWidth and offsetHeight of 0, so the terminal doesn’t render properly.

  2. Manual Resize:

    When I manually resize the window, the handleResize method is triggered, calling fitAddon.fit() again.
    This time, the terminal resizes correctly because the container has valid dimensions.

Relevant Code Snippets:

logs.vue

<template>
  <section class="logsContainer">
    <div class="loaderWrapper" v-if="isLoading">
      <MainComponentsLoader />
    </div>

    <div v-if="!isLoading && logsExists" class="searchBar">
      <input type="text" placeholder="Search..." v-model="search" />
    </div>

    <div v-else-if="!isLoading && !logsExists" class="noLogs">
      <img src="https://static.mobileye.com/zorro/img/no_data_image.png" alt="No Logs" />
      <p>Can’t find logs</p>
    </div>

    <div v-show="!isLoading && logsExists" class="terminalContainer" ref="terminalContainer"></div>
  </section>
</template>

<script setup>
import {
  ref,
  onMounted,
  watch,
  nextTick,
  onBeforeUnmount,
} from 'vue';
import { Terminal } from '@xterm/xterm';
import { FitAddon } from '@xterm/addon-fit';
import debounce from 'lodash/debounce';
import { logsObserver, logsObserverHttp, highlightGrep } from '../../../services/logs.service';
import { escapeRegExp } from '../../../services/utils.service';

const props = defineProps({
  logsData: { type: Object, required: true },
});

const search = ref('');
const terminalContainer = ref(null);
const isLoading = ref(true);
const logsExists = ref(false);
const grep = ref('');
const fetchedLogs = ref([]);

let term = null;
let fitAddon = null;
let subscription = null;

watch(search, (value) => {
  setGrep(value);
});

const setGrep = debounce((value) => {
  grep.value = value;
}, 200);

const initializeTerminal = () => {
  term = new Terminal({
    scrollback: 999999,
    allowTransparency: true,
    convertEol: true,
    theme: {
      background: 'white',
      foreground: '#495763',
      selectionBackground: '#0000ff55',
    },
  });

  fitAddon = new FitAddon();
  term.loadAddon(fitAddon);
  term.open(terminalContainer.value);

  fitAddon.fit();

  term.attachCustomKeyEventHandler((ev) => {
    if (ev.ctrlKey && ev.code === 'KeyC' && ev.type === 'keydown') {
      const selection = term.getSelection();
      if (selection) {
        navigator.clipboard?.writeText(selection);
        return false;
      }
    }
    return true;
  });
};

const handleResize = debounce(() => {
  if (fitAddon) {
    fitAddon.fit();
  }
}, 2000);

onMounted(async () => {
  window.addEventListener('resize', handleResize);
});

const subscribeToLogsObserver = (url) => {
  return logsObserver(url)
    .subscribe({
      next: (line) => {
        term.write(line + '\r\n');
        isLoading.value = false;
      },
      error: (err) => {
        console.error(err);
        isLoading.value = false;
      }
    });
}

watch(() => terminalContainer.value, async (newVal) => {
  isLoading.value = true;
  await nextTick();

  initializeTerminal();

  subscription = subscribeToLogsObserver(props.logsData.url);
});

onBeforeUnmount(() => {
  window.removeEventListener('resize', handleResize);
  if (term) {
    term.dispose();
    term = null;
  }

  if (subscription) {
    subscription.unsubscribe();
    subscription = null;
  }
});
</script>

<style scoped lang="scss">
.logsContainer {
  /* styles */
}
</style>

Issue:

  • The fitAddon.fit() is called immediately after term.open(terminalContainer.value), but at this point, terminalContainer’s offsetWidth and offsetHeight are 0.

  • Consequently, the terminal doesn’t render with the correct dimensions.

  • Only after resizing the window does the terminal fit correctly.

What I’ve Tried:

  1. Using nextTick:

    await nextTick();
    fitAddon.fit();
    

    Ensured DOM updates before fitting, but the issue persists.

  2. Delaying fitAddon.fit() with setTimeout:

    setTimeout(() => {
      fitAddon.fit();
    }, 100);
    

    Not reliable and sometimes still too early.

Question:

How can I ensure that fitAddon.fit() is called after the terminalContainer has been fully rendered and has valid dimensions in a Vue.js component? I want the terminal to initialize with the correct size without requiring a manual window resize.

Additional Information:

  • Vue.js Version: 3.x
  • xterm.js Version: Latest
  • Expected Behavior: Terminal fits correctly within its container upon initialization.
  • Observed Behavior: Terminal dimensions are 0 initially and only resize correctly after a manual window resize.
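
Would wiring fitAddon.fit() to a ResizeObserver on the container, instead of relying only on the window resize event, be the idiomatic fix here? Roughly what I mean, reusing the same terminalContainer ref and fitAddon from my component:

// Call fit() whenever the container actually receives non-zero dimensions,
// not only when the window is resized.
let resizeObserver = null;

const observeContainer = () => {
  resizeObserver = new ResizeObserver((entries) => {
    const { width, height } = entries[0].contentRect;
    if (width > 0 && height > 0 && fitAddon) {
      fitAddon.fit(); // container now has real dimensions
    }
  });
  resizeObserver.observe(terminalContainer.value);
};

// e.g. call observeContainer() right after initializeTerminal(),
// and call resizeObserver?.disconnect() in onBeforeUnmount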

Splash screen for full page loading cycle in a Vue3 app

Basically, it’s a Telegram Web App.
Before the full app has loaded, I want to show a splash screen (regular HTML/CSS).

I tried using the v-cloak directive on the <body> element; it worked, but only once the page had fully loaded, i.e. while the page is loading I still see a blank screen with the default progress bar at the bottom (depending on the platform – phone or desktop).

I also tried the <Suspense> tag, but it didn’t work for me.

Could you suggest a solution for this?
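
One idea I'm wondering about: putting the splash markup directly inside the #app mount element in index.html, so the static HTML/CSS is visible immediately and Vue replaces it when the app mounts. A sketch (class names and paths are illustrative):

<!-- index.html: whatever is inside #app is shown until createApp(...).mount('#app') replaces it -->
<div id="app">
  <div class="splash">
    <img src="/splash-logo.svg" alt="Loading..." />
    <p>Loading…</p>
  </div>
</div>
<script type="module" src="/src/main.js"></script>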

Thanks!

Why is the Swiper JS next button not working?

Hey guys, so I have this code that loops through a list of products to create product cards automatically instead of manually adding the HTML; it also creates the Swiper structure. The issue is that when I click next on the Swiper it doesn’t display any card, but the previous button works fine. Can anyone tell me why?

import { products } from "./data/products.js";

function generateProductCards() {
  const productsContainer = document.querySelector(".products-container");

  // Create the Swiper structure
  let productsContainerHTML = `
    <div class='swiper'>
      <div class='swiper-wrapper'>
  `;

  // Loop through products to create slides
  products.forEach((product) => {
    const firstModel = product.models[0];
    productsContainerHTML += `
      <div class='swiper-slide product-card'>
        <img class='product-image' src='${firstModel.image}' alt='${product.name}'>
        <div class='product-text'>
          <h1 class='product-title'>${product.name}</h1>
          <h2 class='product-price'>${product.price} DA</h2>
          <button class='buy-now'>BUY NOW</button>
        </div>
      </div>
    `;
  });

  // Close the Swiper structure
  productsContainerHTML += `
      </div>
      <div class='swiper-pagination'></div>
      <div class='swiper-button-next'></div>
      <div class='swiper-button-prev'></div>
    </div>
  `;

  // Set the HTML content
  productsContainer.innerHTML = productsContainerHTML;

 
}

generateProductCards();

function initializeSwiper(){
  new Swiper('.swiper', {
    effect: 'slide',
    slidesPerView: 4,
    spaceBetween: 10, // Add spacing between slides
    loop: true,       // Enable looping
    grabCursor: true, // Add grab cursor for better UX
    navigation: {
      nextEl: '.swiper-button-next',
      prevEl: '.swiper-button-prev',
    },
    pagination: {
      el: '.swiper-pagination',
      clickable: true,
    },
  });
}

// Pass the function reference so it runs once the DOM is ready
document.addEventListener('DOMContentLoaded', initializeSwiper);