Does the JavaScript engine optimize repeated array traversal in switch cases?

Consider a scenario where a piece of JavaScript code uses a for…of loop to iterate over an array, and within that loop, a switch statement is used. Each case performs the same operation to calculate the sum of the array elements using the reduce method:

const arr = [1, 2, 3, 10, 20, 30];

for (const el of arr) {
    switch (el) {
        case 1:
            const sum1 = arr.reduce((acc, curr) => acc + curr, 0);
            // ...
            break;

        case 2:
            const sum2 = arr.reduce((acc, curr) => acc + curr, 0);
            // ...
            break;

        case 3:
            const sum3 = arr.reduce((acc, curr) => acc + curr, 0);
            // ...
            break;

        // ...

        default:
            break;
    }
}

In this example, the reduce method is called in each case to compute the sum of the array elements. My question is: does the JavaScript engine optimize this repeated traversal in any way, or will it execute the array traversal independently for each case every time?

I understand that it is possible to refactor the code to perform the array traversal outside of the loop, which would prevent the repeated traversal. However, in that case, the traversal would always occur, even when it might not be necessary (e.g., if the loop does not reach certain cases).
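One middle ground I can think of is lazy memoization: compute the sum on first use and cache it, so the traversal happens at most once, and only when a case actually needs it. A sketch:

```javascript
const arr = [1, 2, 3, 10, 20, 30];

// Compute the sum at most once, and only when a case actually needs it.
let cachedSum = null;
const getSum = () => {
  if (cachedSum === null) {
    cachedSum = arr.reduce((acc, curr) => acc + curr, 0);
  }
  return cachedSum;
};

for (const el of arr) {
  switch (el) {
    case 1:
    case 2:
    case 3: {
      const sum = getSum(); // first call traverses; later calls hit the cache
      // ...
      break;
    }
    default:
      break;
  }
}
```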

What are the performance implications of this approach, and how do modern JavaScript engines handle such scenarios?

I want to know how to efficiently handle repeated array traversals in multiple switch cases, especially when some cases might not need the traversal at all.

What event should I listen for when a user clicks the submit button of an invalid HTML5 form?

I have an HTML5 form with a required field. I want to catch the user interaction when the user submits the incomplete form and is shown the error message.

I thought I could use the invalid event, yet apparently it does not work.

How can I catch the invalid user interaction?

E.g.:

console.log('register event handler');

oninvalid = (event) => {
  console.log('it happened', event); // is not reached
};

addEventListener("invalid", (event) => {
  console.log('hallo'); // is not reached either
});
<form>
  <div class="group">
    <input type="text" />
    <label>Normal</label>
  </div>
  <div class="group">
    <input type="text" required />
    <label>Required</label>
  </div>
  <input type="submit" />
</form>
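From reading around, the invalid event apparently does not bubble, which would explain why a listener registered on window never fires; a capture-phase listener should still intercept it. A sketch (the helper name is mine):

```javascript
// "invalid" fires on the form control itself and does not bubble,
// so a bubble-phase listener on window/document never runs.
// A capture-phase listener (third argument true) still intercepts it.
function watchInvalid(root, onInvalid) {
  root.addEventListener(
    "invalid",
    (event) => onInvalid(event.target),
    true // capture phase
  );
}

// usage: watchInvalid(document, (control) => console.log("invalid:", control));
```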

Protect GET calls when authentication is with access-tokens

My question is not specific to a certain language or framework; it is relevant to any authentication with access tokens.

I have separate backend and frontend servers (in my case NodeJS and Vue 3).
The authentication method is an access token: the user logs in, receives an access token, and that token is sent in a header with each request.

My problem is how to protect GET calls, especially for image and media HTML tags (<img src='....'/>).

I figured out that for “normal” GET calls, which I perform with axios, I can add the token as well:

axios.interceptors.request.use(function (config) {
    // Retrieve access token which was stored previously 
    const authStore = useAuthStore();
    const accessToken = authStore.accessToken;

    if (!!accessToken) {
        config.headers.Authorization = 'Bearer ' +  accessToken.value;
    }

    return config;
});

Which will look like this in curl:

curl --location 'http://api.mybackendserver.com' \
--header 'Authorization: Bearer XXXXX'

However, when I need an image for example:

<img src="http://api.mybackendserver.com/uploads/xxx">

I have a problem: either I allow public access to these files, which are private, or I cannot show the image to the user (in my case inside a text editor, which makes the problem even more complex).

My question is:
What options do I have? What would be the best practice to implement it?

HTMLVideoElement load() throws uncatchable DOMException

The following code:

try {
    videoElem.load();
} catch (ignored) {}

Sometimes throws this exception in Firefox: Uncaught (in promise) DOMException: The fetching process for the media resource was aborted by the user agent at the user's request.

I think it has to do with Firefox not loading the video when the page is not visible, to save resources. This makes sense, but I would like to catch the exception and continue executing my code.

The statement is already in a try/catch, but the exception is still uncaught. load() does not return a promise. How do I catch this error?
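From what I can tell, the rejection does not actually originate from load() itself (which returns undefined); it surfaces through the promise returned by an earlier play() call, which load() aborts. So it can be caught on that promise, or globally via an unhandledrejection handler. A sketch with a small wrapper (the helper name is mine):

```javascript
// load() aborts any pending play() promise; the DOMException surfaces as
// that promise's rejection, not as a synchronous throw from load().
function safePlay(videoElem) {
  const promise = videoElem.play();
  // Older browsers may return undefined from play().
  if (promise && typeof promise.catch === "function") {
    return promise.catch((err) => {
      console.warn("playback interrupted:", err.name); // e.g. AbortError
      return null;
    });
  }
  return Promise.resolve(null);
}
```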

How to remove the current user while fetching all the users from the database in React?

How can I use the ternary operator ( ? true : false ) or something else to remove the current logged-in user from this list?

    const [users, setUsers] = useState([]); // start with an empty array so .map works on first render

    useEffect(() => {
        axios.get("http://localhost:3000/api/v1/user/bulk?filter=" + filter)
            .then(response => {
                setUsers(response.data.user)
            })
    }, [filter])
    
    // Then using it like this, to get all the users data.

    {users.map(user => <User key={user._id} user={user} />)}

I tried a few things along the lines of separating out the users, but I couldn’t get it to work.
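For reference, the shape I am aiming for is to filter before mapping (a sketch; currentUserId is a placeholder for however the logged-in user’s id is available, e.g. from auth state):

```javascript
// Drop the logged-in user before rendering the list.
// currentUserId is assumed to come from auth state (placeholder name).
function excludeCurrentUser(users, currentUserId) {
  return users.filter((user) => user._id !== currentUserId);
}

// In the JSX:
// {excludeCurrentUser(users, currentUserId).map(user => (
//   <User key={user._id} user={user} />
// ))}
```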

How do I set up a Nuxt + Nitro + TypeORM application?

So I’m new to the Nuxt ecosystem; I’m coming from years of Laravel/PHP, though I’m already familiar with Vue/TypeScript.

I just can’t make it work properly. The problem I’m facing is that I can’t get the TypeScript config for Nitro right.

TypeORM requires both of these flags to be set to true in order to work:

"emitDecoratorMetadata": true,
"experimentalDecorators": true,

The problem is that I’ve already set them in my main tsconfig.json and also in server/tsconfig.json, and it just doesn’t work:

{
  "extends": "../.nuxt/tsconfig.server.json",
  "compilerOptions": {
    "target": "ESNext",
    "emitDecoratorMetadata": true,
    "experimentalDecorators": true
  }
}

I got this error despite my correct tsconfig.json settings:

ERROR: Transforming JavaScript decorators to the configured target environment ("es2019") is not supported yet

Nitro can’t figure out how to handle the TypeORM decorator:

@Entity()  // This one
export class User {
  @PrimaryColumn()
  id: number
}

Digging around the internet, I found a workaround: setting the TypeScript config in nuxt.config.ts like this:

  nitro: {
    esbuild: {
      options: {
        tsconfigRaw: {
          compilerOptions: {
            target: 'ESNext',
            experimentalDecorators: true,
          },
        },
      },
    },
  },

It partially solves the problem, but unfortunately the compilerOptions property doesn’t accept an emitDecoratorMetadata prop (Object literal may only specify known properties, and ’emitDecoratorMetadata’ does not exist in type), so I get a new error:

[worker init] Column type for User#id is not defined and cannot be guessed. Make sure you have turned on an “emitDecoratorMetadata”: true option in tsconfig.json. Also make sure you have imported “reflect-metadata” on top of the main entry file in your application (before any entity imported). If you are using JavaScript instead of TypeScript you must explicitly provide a column type.

I’m stuck here. How can I make it work properly?

Uncaught TypeError: Cannot read properties of undefined (reading 'refs')

I get the error below when the XYZ component is rendered:

Uncaught TypeError: Cannot read properties of undefined (reading 'refs')
    at detach (chunk-Y4ZIAFTH.js?v=375eb230:3053:3)
    at chunk-Y4ZIAFTH.js?v=375eb230:3125:9
    at chunk-Y4ZIAFTH.js?v=375eb230:3080:9
    at safelyCallDestroy (chunk-E6XXLY7I.js?v=375eb230:16748:13)
    at commitHookEffectListUnmount (chunk-E6XXLY7I.js?v=375eb230:16875:19)
    at commitPassiveUnmountOnFiber (chunk-E6XXLY7I.js?v=375eb230:18232:17)
    at commitPassiveUnmountEffects_complete (chunk-E6XXLY7I.js?v=375eb230:18213:15)
    at commitPassiveUnmountEffects_begin (chunk-E6XXLY7I.js?v=375eb230:18204:15)
    at commitPassiveUnmountEffects (chunk-E6XXLY7I.js?v=375eb230:18169:11)
    at flushPassiveEffectsImpl (chunk-E6XXLY7I.js?v=375eb230:19489:11)
    at flushPassiveEffects (chunk-E6XXLY7I.js?v=375eb230:19447:22)
    at performSyncWorkOnRoot (chunk-E6XXLY7I.js?v=375eb230:18868:11)
    at flushSyncCallbacks (chunk-E6XXLY7I.js?v=375eb230:9119:30)
    at commitRootImpl (chunk-E6XXLY7I.js?v=375eb230:19432:11)
    at commitRoot (chunk-E6XXLY7I.js?v=375eb230:19277:13)
    at finishConcurrentRender (chunk-E6XXLY7I.js?v=375eb230:18805:15)

chunk-E6XXLY7I.js?v=375eb230:14032 The above error occurred in the <XYZ> component:

Electron communication between renderer, preloader and main

I wish to send data from my main.js to the renderer.js, but it doesn’t seem to work.

main.js

function createWindow() {
  const win = new BrowserWindow({
      width: 285  ,
      height: 450,
      autoHideMenuBar: true,
      webPreferences: {
          preload: path.join(__dirname, 'renderer.js'),
          contextIsolation: true,
          enableRemoteModule: false,
          nodeIntegration: true,
          contextIsolation: false,
          'node-integration': false
      }
  });
  win.loadFile('icon-chooser.html');
  getFilesSync().forEach((file) => {
    console.log('sending file:' + file);
    win.webContents.send('add-icon', -1)
  });
}
function addIcon() {
  ipcMain.on('add-icon', (event, value) => {
    console.log('received callback');
    console.log(value);
  });
}
    
app.on('ready', () => {
    addIcon();
    createWindow();
});

preload.js

const { contextBridge, ipcRenderer } = require('electron')

contextBridge.exposeInMainWorld('electronAPI', {
  onAddIcon: ipcRenderer.on('add-icon', (_event, value) => {
     console.log('preload received');
  })
})

renderer.js

const { window } = require('electron');

window.electronAPI.onAddIcon((value) => {
    console.log("add new icon file " + value);   
    document.getElementsByClassName('.iconsholder')[0].append(value);
}); 

But the only output I get is the one from main: console.log('sending file:' + file);

Output console

What am I doing wrong?

I followed many tutorials. I expected to see console.log("add new icon file " + value); from renderer.js in the console.
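For comparison, the preload pattern I have seen in the Electron IPC docs wraps ipcRenderer.on so the renderer passes its own callback, rather than registering the callback inside the preload. Here it is written as a factory over ipcRenderer (the factory shape is mine, so the wiring can be exercised without Electron):

```javascript
// Preload pattern: expose a function that takes the renderer's callback and
// forwards the IPC payload to it.
function buildElectronAPI(ipcRenderer) {
  return {
    onAddIcon: (callback) =>
      ipcRenderer.on("add-icon", (_event, value) => callback(value)),
  };
}

// In a real preload.js (sketch):
// const { contextBridge, ipcRenderer } = require("electron");
// contextBridge.exposeInMainWorld("electronAPI", buildElectronAPI(ipcRenderer));
```

The renderer then calls window.electronAPI.onAddIcon(value => { ... }) without requiring anything from electron.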

Why is my body covering the whole website and preventing interactions? [closed]

This is homework: I’m making a website for a comics convention in my town. When I hover over an element, or over anything clickable, the effect doesn’t show. In the inspector, the problem is that the body covers everything, so when I move the mouse I’m hovering over the body rather than the element, which prevents the animation and the click. I’m using only HTML, CSS and JS, without libraries. Please help!

react-draggable issue on mobile devices

I have an issue with the react-draggable library: on mobile or tablet devices, onClick events from children are not propagated. Does anyone know what the reason might be?

<Draggable onStart={handleDragStart} onDrag={handleDrag} onStop={handleDragStop}>
  <div
    className={!cartWidgetCollapsed ? styles.widgetContainer : styles.widgetContainerCollapsed}>
    <div style={{ width: '100%', height: '100%', pointerEvents: isDragging ? 'none' : 'auto' }}>
      {cartWidgetCollapsed ? (
        <div className={styles.widgetCartContainerCollapsed} onClick={toggleOpenCart}>
          <CartIcon disabled={isDragging} />
          <div className={styles.widgetTogglerExpanded} onClick={toggleCollapse}>
            <img draggable={false} src={expand} alt={ALT_CONSTANTS.ACTION_ICON} />
          </div>
        </div>
      ) : (
        <>
          <div className={styles.widgetTogglerCollapsed} onClick={toggleCollapse}>
            <img draggable={false} src={collapse} alt={ALT_CONSTANTS.ACTION_ICON} />
          </div>
          <div
            className={styles.widgetCartContainer}
            {...(!isDragging && { onClick: toggleOpenCart })}>
            <SuffixSelect
              disabled={isDragging}
              isLoading={isFetchingSites}
              icon={locationPin}
              dropdownStyle={{ visibility: isDragging ? 'hidden' : 'initial' }}
              listOptions={sites}
              value={selectedSite?.code}
              onChange={changeSiteHandler}
              customStyles={{
                borderRadius: '12px',
                marginBottom: '7px',
                overflow: 'hidden',
                borderColor: 'var(--primary-100)',
                backgroundColor: 'var(--primary-100)',
                pointerEvents: isDragging ? 'none' : 'auto',
              }}
            />

            <div className={styles.widgetAmount}>
              <p>{UtilService.numberToDollar(cartAmount)}</p>
              <p className={styles.widgetCart}>
                <Icon isEmbedded disabled={isDragging} />
                <span>{'Click ->'}</span>
              </p>
            </div>
          </div>
        </>
      )}
    </div>
  </div>
</Draggable>

From the code you can see onClick={toggleOpenCart} and onClick={toggleCollapse}. Both are propagated only on PCs.
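For context, one workaround I have seen suggested is that react-draggable’s touch handling can suppress the synthetic click on touch devices, so also treating onTouchEnd as a tap (when no drag occurred) may help. A sketch (the helper and ref names are mine):

```javascript
// Treat both click and touchend as a "tap", but only when no drag is in
// progress. isDraggingRef is assumed to be a React ref updated by the
// Draggable onStart/onStop handlers.
function makeTapHandlers(onTap, isDraggingRef) {
  const fire = () => {
    if (!isDraggingRef.current) onTap();
  };
  return { onClick: fire, onTouchEnd: fire };
}

// Usage sketch: <div {...makeTapHandlers(toggleOpenCart, isDraggingRef)}>...</div>
```

On devices that fire both touchend and a synthetic click, this can double-fire, so some deduplication would be needed; it is a sketch, not a drop-in fix.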

How to upload file from local path using the utapi uploadFiles

I’m trying to upload files from disk, without setting up an endpoint or getting them from the client, but the utapi functions do not work directly with a file path or with the buffer returned by fs.readFile(path).

How do I transform the file so that I can upload it using utapi.uploadFiles(files)?

I’m working with Next.js 14.

Here is where I instantiated the UTAPI:

const utapi = new UTApi({
    token: process.env.UPLOADTHING_TOKEN!,
});

Here is the function I’m using it from:

export async function UploadPowerpointToUploadThing(
    fileBuffer: Buffer,
    fileName: string
): Promise<UploadFileResult[]> {
    try {
        const file = new File([fileBuffer], fileName, {
            type: "application/vnd.openxmlformats-officedocument.presentationml.presentation",
        });

        console.log("THIS IS UTAPI FILE BEFORE UPLOAD", file);

        const response = await utapi.uploadFiles([file]);

        console.log("THIS IS UTAPI RESPONSE AFTER UPLOAD", response);

        if (!response?.[0].data?.url) {
            throw new Error("Upload failed - No URL returned");
        }

        return response;
    } catch (error) {
        console.error(error);
        throw new Error("Failed to upload powerpoint to uploadthing");
    }
}

Lag in live feed over WebSocket in LightningChartJS Medical Dashboard

I am currently using the Medical Dashboard from LightningChartJS, which has 4 channels (ECG, pulse rate, respiratory rate, blood pressure), and I am using a WebSocket to feed live data into the charts.

But I have observed a delay in receiving data over the socket connection. The typical time difference between two received data sets is 1 second, and it can go up to 5 seconds. This makes the charts lag, since both X and Y axis values are pushed only when handleIncomingData() executes, which happens in socket.onmessage().

So data is received every second and plotted one by one, step by step, which gives a sense of lag. If no data is received for, say, 5 seconds, the chart stops.

Expected behaviour: I want the X axis to advance continuously irrespective of the Y values, so that it is appealing to the eye, and when data is received from the socket, to plot it according to the sampling rate. For periods where data arrival is delayed, nothing should be plotted, i.e. there will be gaps in the chart, which I am okay with (for now).

I tried to implement this but ran into unexpected behaviour, and I have hit a dead end. Please help.

Below is my current implementation:

import {
  emptyFill,
  emptyLine,
  UIOrigins,
  UILayoutBuilders,
  UIElementBuilders,
  AxisTickStrategies,
  AxisScrollStrategies,
  synchronizeAxisIntervals,
} from "@lightningchart/lcjs";
import { useEffect, useRef } from "react";
import { useSelector } from "react-redux";
import { iChannel } from "../../interfaces";
import { WEBSOCKET_URL } from "../../utils/constants";
import { generateRandomID, lc } from "../../utils/helperFunctions";
import "./PatientVitals.css";

const PatientVitals = () => {
  let ecgInput: number[] = [];
  let pulseRateInput: number[] = [];
  let respRateInput: number[] = [];

  const channels: iChannel[] = [
    {
      shortName: "ECG/EKG",
      name: "Electrocardiogram",
      type: "ecg",
      dataSet: [],
      yStart: -50,
      yEnd: 160,
      rate: 256,
    },
    {
      shortName: "Pulse Rate",
      name: "Pleth",
      type: "pulse",
      dataSet: [],
      yStart: -200,
      yEnd: 200,
      rate: 256,
    },
    {
      shortName: "Respiratory rate",
      name: "Resp",
      type: "resp",
      dataSet: [],
      yStart: -150,
      yEnd: 150,
      rate: 128,
    },
    {
      shortName: "NIBP",
      name: "Blood pressure",
      type: "bloodPressure",
      dataSet: [],
      yStart: 50,
      yEnd: 200,
      rate: 256,
    },
  ];

  const TIME_DOMAIN = 10 * 1000;
  const patientDetails = useSelector((state: any) => ({
    patient_uhid: state.patient_uhid,
  }));

  const socketRef = useRef<WebSocket | null>(null);
  const closeWebSocket = () => {
    if (socketRef.current) {
      socketRef.current.close();
      console.log("WebSocket connection closed");
    }
  };

  const createCharts = () => {
    const layoutCharts = document.createElement("div");
    layoutCharts.style.display = "flex";
    layoutCharts.style.flexDirection = "column";

    const chartList = channels?.map((_, i) => {
      const container = document.createElement("div");
      layoutCharts.append(container);
      container.style.height = i === channels?.length - 1 ? "150px" : "220px";
      const chart = lc
        .ChartXY({ container })
        .setPadding({ bottom: 4, top: 4, right: 140, left: 10 })
        .setMouseInteractions(false)
        .setCursorMode(undefined);

      const axisX = chart.getDefaultAxisX().setMouseInteractions(false);
      axisX
        .setTickStrategy(AxisTickStrategies.Time)
        .setInterval({ start: -TIME_DOMAIN, end: 0, stopAxisAfter: false })
        .setScrollStrategy(AxisScrollStrategies.progressive);

      if (i > 0) {
        chart.setTitleFillStyle(emptyFill);
      } else {
        let tFpsStart = window.performance.now();
        let frames = 0;
        let fps = 0;
        const recordFrame = () => {
          frames++;
          const tNow = window.performance.now();
          fps = 1000 / ((tNow - tFpsStart) / frames);
          requestAnimationFrame(recordFrame);

          chart.setTitle(`Medical Dashboard (FPS: ${fps.toFixed(1)})`);
        };
        requestAnimationFrame(recordFrame);
        setInterval(() => {
          tFpsStart = window.performance.now();
          frames = 0;
        }, 5000);
      }

      return chart;
    });

    const uiList = chartList?.map((chart, i) => {
      let labelEcgHeartRate;
      let labelBpmValue;
      let labelBloodPIValue;
      let labelMinMaxBPValue;
      let labelMeanBPValue;
      let labelRespiratoryValue;

      const axisX = chart.getDefaultAxisX();
      const axisY = chart
        .getDefaultAxisY()
        .setMouseInteractions(false)
        .setTickStrategy(AxisTickStrategies.Empty)
        .setStrokeStyle(emptyLine);
      const channel = channels[i];

      const ui = chart
        .addUIElement(UILayoutBuilders.Column, chart.coordsRelative)
        .setBackground((background: any) =>
          background.setFillStyle(emptyFill).setStrokeStyle(emptyLine)
        )
        .setMouseInteractions(false)
        .setVisible(false);

      ui.addElement(UIElementBuilders.TextBox).setText(channel.shortName);
      ui.addElement(UIElementBuilders.TextBox)
        .setText(channel.name)
        .setTextFont((font) => font.setSize(10));

      if (i !== channels.length - 1) {
        ui.addElement(UIElementBuilders.TextBox)
          .setText(`${channel.rate} samples/second`)
          .setTextFont((font) => font.setSize(10));
      }
      if (channel.name === "Electrocardiogram") {
        labelEcgHeartRate = ui
          .addElement(UIElementBuilders.TextBox)
          .setText("")
          .setTextFont((font: any) => font.setSize(36))
          .setMargin({ top: 10 });
      }
      if (channel.name === "Pleth") {
        ui.addElement(UIElementBuilders.TextBox)
          .setMargin({ top: 10 })
          .setText("SPO2");
        labelBpmValue = ui
          .addElement(UIElementBuilders.TextBox)
          .setText("")
          .setTextFont((font: any) => font.setSize(36));
        labelBloodPIValue = ui
          .addElement(UIElementBuilders.TextBox)
          .setText("")
          .setTextFont((font: any) => font.setSize(12));
      }
      if (channel.name === "Blood pressure") {
        labelMinMaxBPValue = ui
          .addElement(UIElementBuilders.TextBox)
          .setText("")
          .setTextFont((font: any) => font.setSize(36));
        labelMeanBPValue = ui
          .addElement(UIElementBuilders.TextBox)
          .setText("")
          .setTextFont((font: any) => font.setSize(36));
      }
      if (channel.name === "Resp") {
        labelRespiratoryValue = ui
          .addElement(UIElementBuilders.TextBox)
          .setText("")
          .setTextFont((font: any) => font.setSize(36))
          .setMargin({ top: 10 });
      }

      const positionUI = () => {
        ui.setVisible(true)
          .setPosition(
            chart.translateCoordinate(
              { x: axisX.getInterval().end, y: axisY.getInterval().end },
              { x: axisX, y: axisY },
              chart.coordsRelative
            )
          )
          .setOrigin(UIOrigins.LeftTop);
        requestAnimationFrame(positionUI);
      };
      requestAnimationFrame(positionUI);

      return {
        labelEcgHeartRate,
        labelBpmValue,
        labelBloodPIValue,
        labelMinMaxBPValue,
        labelMeanBPValue,
        labelRespiratoryValue,
      };
    });

    synchronizeAxisIntervals(
      ...chartList.map((chart) => chart.getDefaultAxisX())
    );

    const seriesList = chartList.map((chart, i) => {
      const series = chart
        .addPointLineAreaSeries({
          dataPattern: "ProgressiveX",
          automaticColorIndex: Math.max(i - 1, 0),
          yAxis: chart.getDefaultAxisY(),
        })
        .setAreaFillStyle(emptyFill)
        .setMaxSampleCount(100_000);
      return series;
    });

    const handleIncomingData = (data: number[][]) => {
      data?.forEach((dataCh, index) => {
        const ch = seriesList[index];
        ch.appendSamples({
          yValues: dataCh,
          step: 1000 / channels[index].rate,
        });
      });
    };

    createSocketConnection(handleIncomingData, channels, uiList);

    const vitalGraphsContainer = document.getElementById("vitalGraphs");
    vitalGraphsContainer?.replaceChildren(layoutCharts);
  };

  function createSocketConnection(handleIncomingData, channels, uiList) {
    const randomID = generateRandomID(4);
    const socket = new WebSocket(
      `${WEBSOCKET_URL}`
    );
    socketRef.current = socket;

    socket.onopen = function (event) {
      console.log("WebSocket connection opened", event);
    };

    socket.onmessage = function (event) {
      const message = JSON.parse(event.data);
      console.log(message, new Date());
      ecgInput = message?.ecg
        ?.split("^")
        ?.filter((item) => item < 1000)
        ?.map(Number);
      pulseRateInput = message?.pulseRate
        ?.split("^")
        ?.filter((item) => item < 1000)
        ?.map(Number);
      respRateInput = message?.respiratoryGraph
        ?.split("^")
        ?.filter((item) => item < 1000)
        ?.map(Number);
      let ecgHeartRate: string = message?.ecgHeartRate;
      let pusleRateValue: string = message?.pulseRateValue;
      let systolicBpValue: string = message?.systolicBpValue;
      let diastolicBpValue: string = message?.diastolicBpValue;
      let meanBpValue: string = message?.meanBpValue;
      let spo2: string = message?.spo2;
      let respiratoryValue: string = message?.respiratoryValue;
      let bloodPerforationIndex: string = message?.bloodPerforationIndex;

      uiList?.forEach((ui) => {
        if (ui.labelEcgHeartRate) {
          const ecgOrPulseRate = ecgHeartRate || pusleRateValue;
          if (ecgOrPulseRate) {
            ui.labelEcgHeartRate.setText(ecgOrPulseRate.toString());
          }
        }
        if (ui.labelBpmValue) {
          if (spo2) {
            ui.labelBpmValue.setText(spo2?.toString());
          }
          if (bloodPerforationIndex) {
            ui.labelBloodPIValue.setText(
              "     PI: " + bloodPerforationIndex?.toString()
            );
          }
        }
        if (ui.labelMinMaxBPValue && systolicBpValue && diastolicBpValue) {
          ui.labelMinMaxBPValue.setText(
            systolicBpValue?.toString() + "/" + diastolicBpValue?.toString()
          );
        }
        if (ui.labelMeanBPValue && meanBpValue) {
          ui.labelMeanBPValue.setText("     (" + meanBpValue?.toString() + ")");
        }
        if (ui.labelRespiratoryValue && respiratoryValue) {
          ui.labelRespiratoryValue.setText(respiratoryValue?.toString());
        }
      });

      const chart_Inputs = [ecgInput, pulseRateInput, respRateInput];
      handleIncomingData(channels?.map((_, index) => chart_Inputs[index]));
    };

    socket.onerror = function (event) {
      console.log("WebSocket error observed:", event);
    };

    socket.onclose = function (event) {
      console.log("Websocket closure code:", event.code);
      if (event.code !== 1000 && event.code !== 1001) {
        console.log(
          "Websocket closed abnormally. Reconnecting to WebSocket server..."
        );
        createSocketConnection(handleIncomingData, channels, uiList);
      }
    };
  }

  useEffect(() => {
    createCharts();
    return () => {
      closeWebSocket();
    };
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, [patientDetails.patient_uhid]);

  return <div id="vitalGraphs"></div>;
};

export default PatientVitals;

Version used: "@lightningchart/lcjs": "^6.0.3"

What I tried:
I tried advancing the X axis by default using setInterval, plotting Y values when data arrived from the socket. It worked, but when the socket stopped sending data, the chart stopped while the X axis continued to flow. Say I start receiving data again 5 seconds later: the chart resumes plotting from the point where it stopped, which is a problem. If I start receiving data after 2 or 10 minutes, it would resume plotting from where it ended, but that timestamp has already passed and scrolled out of view, since the X axis keeps moving. So, from the end user’s point of view, there is essentially no chart.

Continuing from the above:
I also tried stopping the X axis when data stopped arriving (actually incrementing it at a very slow rate, so that it looks stopped), but in that case chart synchronisation between multiple devices is hampered, i.e. the charts for a particular patient are not the same across devices.
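One direction I have been exploring for the gap behaviour (a sketch, assuming appendSamples accepts explicit xValues alongside yValues, as in recent lcjs versions): timestamp each batch at arrival time, so delays in the feed show up as visible gaps on the X axis instead of the plot compressing or resuming from an old position:

```javascript
// Timestamp each incoming batch at arrival time so feed delays become
// visible gaps on the X axis. rateHz is the channel's sampling rate;
// now() is injectable for testing.
function makeBatchAppender(series, rateHz, now = () => Date.now()) {
  const step = 1000 / rateHz;
  return (yValues) => {
    const tEnd = now();
    // Spread the batch backwards from "now" at the sampling interval.
    const xValues = yValues.map(
      (_, i) => tEnd - (yValues.length - 1 - i) * step
    );
    series.appendSamples({ xValues, yValues });
    return xValues;
  };
}
```

With arrival-time X values, resuming after a long outage plots at the current timestamp rather than where the series left off.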

Issue in Audio streaming with Socket IO Flask Application

Below is my user1_interface.html/user2_interface.html code, in which I am able to hear the audio, but the issue is that the audio button depends on the video: only if the video is turned on can I turn on the audio.

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>{{ name | capitalize }}</title>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/4.4.1/socket.io.min.js"></script>
    <style>
        .video-button {
            position: absolute;
            top: 20px;
            right: 110px;
            background-color: transparent;
            border: none;
            cursor: pointer;
        }

        .video-button img {
            width: 40px;
            height: 40px;
        }

        .remote-video-style {
            position: fixed;
            bottom: -12px;
            right: 20px;
            width: 180px;
            height: 180px;
            z-index: 1000;
        }

        .mute-button {
            position: absolute;
            top: 20px;
            right: 160px;
            background-color: transparent;
            border: none;
            cursor: pointer;
        }

        .mute-button img {
            width: 35px;
            height: 35px;
        }
    </style>
</head>
<body>
    <h2>{{ name | capitalize }}</h2>

    <video id="remoteVideo" autoplay playsinline></video>
    
    <button id="cameraButton" onclick="toggleCamera()" class="video-button">
        <img id="camera-icon" src="{{ url_for('static', filename='vidoff.png') }}" alt="Camera On"
        data-show="{{ url_for('static', filename='vidon.png') }}"
        data-hide="{{ url_for('static', filename='vidoff.png') }}">
    </button>

    <button id="mute-button" class="mute-button">
        <img id="mute-icon" src="{{ url_for('static', filename='mute.png') }}" alt="Mute" 
        data-show="{{ url_for('static', filename='unmute.png') }}"
        data-hide="{{ url_for('static', filename='mute.png') }}">
    </button>

    <script>
        const socket = io();
        const remoteVideo = document.getElementById("remoteVideo");
        const cameraButton = document.getElementById("cameraButton");
        const cameraIcon = document.getElementById("camera-icon");
        const muteButton = document.getElementById("mute-button");
        const muteIcon = document.getElementById("mute-icon");
        let localStream = null;
        let peerConnection = null;
        let isCameraOn = false;
        let isMuted = true;  // Initially muted

        async function toggleCamera() {
            if (isCameraOn) {
                stopCamera();
            } else {
                startCamera();
            }
        }

        async function startCamera() {
            try {
                // Access both video and audio
                localStream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
                cameraIcon.src = cameraIcon.getAttribute('data-show');
                isCameraOn = true;

                createPeerConnection();
                localStream.getTracks().forEach(track => peerConnection.addTrack(track, localStream));

                // Initially mute audio
                if (localStream.getAudioTracks().length > 0) {
                    localStream.getAudioTracks()[0].enabled = !isMuted;
                }

                // Create an offer and send it to the other user
                const offer = await peerConnection.createOffer();
                await peerConnection.setLocalDescription(offer);
                socket.emit('offer', { type: 'offer', sdp: offer.sdp });
            } catch (error) {
                console.error("Error accessing camera and microphone:", error);
            }
        }

        function stopCamera() {
            if (localStream) {
                localStream.getTracks().forEach(track => track.stop());
                localStream = null;
            }
            if (peerConnection) {
                peerConnection.close();
                peerConnection = null;
            }

            cameraIcon.src = cameraIcon.getAttribute('data-hide');
            isCameraOn = false;
            remoteVideo.srcObject = null;
            remoteVideo.classList.remove("remote-video-style");
            socket.emit('offer', { type: 'offer', sdp: null });
        }

        function createPeerConnection() {
            peerConnection = new RTCPeerConnection();

            // Handle incoming remote track
            peerConnection.ontrack = (event) => {
                if (event.streams && event.streams[0]) {
                    remoteVideo.srcObject = event.streams[0];
                    console.log("Received remote stream:", event.streams[0]);
                } else {
                    console.warn("No streams in ontrack event.");
                }
                remoteVideo.classList.add("remote-video-style");
            };

            // Handle ICE candidates
            peerConnection.onicecandidate = (event) => {
                if (event.candidate) {
                    socket.emit('candidate', { candidate: event.candidate });
                }
            };
        }

        // Function to toggle Mute/Unmute
        muteButton.addEventListener("click", () => {
            if (localStream && localStream.getAudioTracks().length > 0) {
                isMuted = !isMuted;
                muteIcon.src = isMuted ? muteIcon.getAttribute('data-hide') : muteIcon.getAttribute('data-show');
                localStream.getAudioTracks()[0].enabled = !isMuted;
                
                console.log("Audio muted:", isMuted);
                
                // Notify the other peer about mute/unmute status
                socket.emit('audio-mute', { isMuted });
            }
        });

        // Socket event listeners for signaling
        socket.on("offer", async (data) => {
            if (data.sdp) {
                if (!peerConnection) createPeerConnection();
                await peerConnection.setRemoteDescription(new RTCSessionDescription({ type: "offer", sdp: data.sdp }));
                const answer = await peerConnection.createAnswer();
                await peerConnection.setLocalDescription(answer);
                socket.emit("answer", { type: "answer", sdp: answer.sdp });
            } else {
                if (peerConnection) {
                    peerConnection.close();
                    peerConnection = null;
                }
                remoteVideo.srcObject = null;
                remoteVideo.classList.remove("remote-video-style");
            }
        });

        socket.on("answer", async (data) => {
            if (peerConnection) {
                await peerConnection.setRemoteDescription(new RTCSessionDescription({ type: "answer", sdp: data.sdp }));
            }
        });

        socket.on("candidate", async (data) => {
            if (peerConnection && data.candidate) {
                await peerConnection.addIceCandidate(new RTCIceCandidate(data.candidate));
            }
        });

        // Handle mute/unmute for remote audio
        socket.on("audio-mute", (data) => {
            if (remoteVideo.srcObject && remoteVideo.srcObject.getAudioTracks().length > 0) {
                remoteVideo.srcObject.getAudioTracks()[0].enabled = !data.isMuted;
                console.log("Remote audio muted:", data.isMuted);
            }
        });
    </script>
</body>
</html>

Now I have modified the user1_interface.html/user2_interface.html code to make the audio independent of the video, but now I am unable to hear the audio. Below is the code snippet with the independent audio feature.

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>{{ name | capitalize }}</title>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/4.4.1/socket.io.min.js"></script>
    <style>
        .video-button {
            position: absolute;
            top: 20px;
            right: 110px;
            background-color: transparent;
            border: none;
            cursor: pointer;
        }

        .video-button img {
            width: 40px;
            height: 40px;
        }

        .remote-video-style {
            position: fixed;
            bottom: -12px;
            right: 20px;
            width: 180px;
            height: 180px;
            z-index: 1000;
        }

        .mute-button {
            position: absolute;
            top: 20px;
            right: 160px;
            background-color: transparent;
            border: none;
            cursor: pointer;
        }

        .mute-button img {
            width: 35px;
            height: 35px;
        }
    </style>
</head>
<body>
    <h2>{{ name | capitalize }}</h2>

    <video id="remoteVideo" autoplay playsinline></video>
    
    <button id="cameraButton" onclick="toggleCamera()" class="video-button">
        <img id="camera-icon" src="{{ url_for('static', filename='vidoff.png') }}" alt="Camera On"
        data-show="{{ url_for('static', filename='vidon.png') }}"
        data-hide="{{ url_for('static', filename='vidoff.png') }}">
    </button>

    <button id="mute-button" class="mute-button">
        <img id="mute-icon" src="{{ url_for('static', filename='mute.png') }}" alt="Mute" 
        data-show="{{ url_for('static', filename='unmute.png') }}"
        data-hide="{{ url_for('static', filename='mute.png') }}">
    </button>

    <script>
        const socket = io();
        const remoteVideo = document.getElementById("remoteVideo");
        const cameraButton = document.getElementById("cameraButton");
        const cameraIcon = document.getElementById("camera-icon");
        const muteButton = document.getElementById("mute-button");
        const muteIcon = document.getElementById("mute-icon");
        
        let localStream = null;
        let audioStream = null;  // Separate audio stream
        let peerConnection = null;
        let isCameraOn = false;
        let isMuted = true;  // Initially muted

        // Function to initialize audio stream (mic only)
        async function initAudioStream() {
            try {
                audioStream = await navigator.mediaDevices.getUserMedia({ audio: true });
                audioStream.getAudioTracks()[0].enabled = !isMuted;  // Set initial mute state
                console.log("Audio stream initialized:", audioStream);
                // Add audio track to the peer connection if available
                if (peerConnection && audioStream) {
                    audioStream.getTracks().forEach(track => peerConnection.addTrack(track, audioStream));
                }
            } catch (error) {
                console.error("Error accessing microphone:", error);
            }
        }

        // Function to toggle Mute/Unmute
        muteButton.addEventListener("click", () => {
            if (!audioStream) {
                // Initialize audio stream if not already done
                initAudioStream().then(() => {
                    toggleAudio();
                });
            } else {
                toggleAudio();
            }
        });

        function toggleAudio() {
            isMuted = !isMuted;
            muteIcon.src = isMuted ? muteIcon.getAttribute('data-hide') : muteIcon.getAttribute('data-show');
            if (audioStream && audioStream.getAudioTracks().length > 0) {
                audioStream.getAudioTracks()[0].enabled = !isMuted;
                console.log("Audio muted:", isMuted);
                socket.emit('audio-mute', { isMuted });
            }
        }

        // Function to stop the audio stream completely
        function stopAudioStream() {
            if (audioStream) {
                audioStream.getTracks().forEach(track => track.stop());
                audioStream = null;
            }
        }

        // Function to toggle camera on/off
        async function toggleCamera() {
            if (isCameraOn) {
                stopCamera();
            } else {
                startCamera();
            }
        }

        async function startCamera() {
            try {
                // Access video (audio already accessed separately in initAudioStream)
                localStream = await navigator.mediaDevices.getUserMedia({ video: true });
                cameraIcon.src = cameraIcon.getAttribute('data-show');
                isCameraOn = true;

                createPeerConnection();

                // Add each video track to the peer connection
                localStream.getTracks().forEach(track => peerConnection.addTrack(track, localStream));

                // Send an offer to the other peer
                const offer = await peerConnection.createOffer();
                await peerConnection.setLocalDescription(offer);
                socket.emit('offer', { type: 'offer', sdp: offer.sdp });
            } catch (error) {
                console.error("Error accessing camera:", error);
            }
        }

        function stopCamera() {
            if (localStream) {
                localStream.getTracks().forEach(track => track.stop());
                localStream = null;
            }
            if (peerConnection) {
                peerConnection.close();
                peerConnection = null;
            }

            cameraIcon.src = cameraIcon.getAttribute('data-hide');
            isCameraOn = false;
            remoteVideo.srcObject = null;
            remoteVideo.classList.remove("remote-video-style");
            socket.emit('offer', { type: 'offer', sdp: null });
        }

        function createPeerConnection() {
            peerConnection = new RTCPeerConnection();

            // Handle incoming remote track
            peerConnection.ontrack = (event) => {
                if (event.streams && event.streams[0]) {
                    remoteVideo.srcObject = event.streams[0];
                    console.log("Received remote stream:", event.streams[0]);
                } else {
                    console.warn("No streams in ontrack event.");
                }
                remoteVideo.classList.add("remote-video-style");
            };

            // Handle ICE candidates
            peerConnection.onicecandidate = (event) => {
                if (event.candidate) {
                    socket.emit('candidate', { candidate: event.candidate });
                }
            };

            // Add audio stream independently of video
            if (audioStream) {
                audioStream.getTracks().forEach(track => peerConnection.addTrack(track, audioStream));
            }
        }

        // Socket event listeners for signaling
        socket.on("offer", async (data) => {
            if (data.sdp) {
                if (!peerConnection) createPeerConnection();
                await peerConnection.setRemoteDescription(new RTCSessionDescription({ type: "offer", sdp: data.sdp }));
                const answer = await peerConnection.createAnswer();
                await peerConnection.setLocalDescription(answer);
                socket.emit("answer", { type: "answer", sdp: answer.sdp });
            } else {
                if (peerConnection) {
                    peerConnection.close();
                    peerConnection = null;
                }
                remoteVideo.srcObject = null;
                remoteVideo.classList.remove("remote-video-style");
            }
        });

        socket.on("answer", async (data) => {
            if (peerConnection) {
                await peerConnection.setRemoteDescription(new RTCSessionDescription({ type: "answer", sdp: data.sdp }));
            }
        });

        socket.on("candidate", async (data) => {
            if (peerConnection && data.candidate) {
                await peerConnection.addIceCandidate(new RTCIceCandidate(data.candidate));
            }
        });

        // Handle mute/unmute for remote audio
        socket.on("audio-mute", (data) => {
            if (remoteVideo.srcObject && remoteVideo.srcObject.getAudioTracks().length > 0) {
                remoteVideo.srcObject.getAudioTracks()[0].enabled = !data.isMuted;
                console.log("Remote audio muted:", data.isMuted);
            }
        });
    </script>
</body>
</html>

Below is the app.py code I am using:

from flask import Flask, render_template, request, redirect, url_for, abort
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app)

@app.route('/')
def index():
    return render_template('index.html')

@app.route('/candidate', methods = ['GET'])
def candidateLogin():
    return render_template('user1.html')

@app.route('/interviewer', methods = ['GET'])
def interviewerLogin():
    return render_template('user2.html')

@app.route('/candidate_interface')
def candidateInterface():
    name = request.args.get('name')
    return render_template('user1_interface.html')

@app.route('/interviewer_interface')
def interviewerInterface():
    name = request.args.get('name')
    return render_template('user2_interface.html')

@app.route('/candidate_signin', methods = ['POST'])
def candidateSignin():
    name = request.args.get('name')
    print(name)
    return redirect(url_for('candidateInterface'))

@app.route('/interviewe_signin', methods = ['POST'])
def intervieweSignin():
    name = request.args.get('name')
    print(name)
    return redirect(url_for('interviewerInterface'))

@socketio.on('offer')
def handle_offer(data):
    print("offer: ", data, '\n')
    emit('offer', data, broadcast=True, include_self=False)

@socketio.on('answer')
def handle_answer(data):
    print("answer: ", data, '\n')
    emit('answer', data, broadcast=True, include_self=False)

@socketio.on('candidate')
def handle_candidate(data):
    print("candidate: ", data, '\n')
    emit('candidate', data, broadcast=True, include_self=False)

@socketio.on('audio-mute')
def handle_audio_mute(data):
    print("audio-mute:", data, '\n')
    emit('audio-mute', data, broadcast=True, include_self=False)

if __name__ == '__main__':
    socketio.run(app, debug=True)

As I am very new to this, I am unable to understand where I am going wrong. Thanks in advance for any suggestions.

I have tried

Is it a good idea to use Node.js's file system for caching in Next.js or Express.js?

To cache the API response, I wrote these simple functions that use Node.js's file system API to write and read the response from a specific JSON file as a caching layer.

const fs = require("fs");
const path = require("path");
const fileName = path.join(__dirname, "CACHE.json");

const getFileName = (key = "") => {
  // Strip non-word characters so the key is safe to use in a file name
  key = key.replace(/\W/g, "");
  return path.join(__dirname, `CACHE${key}.json`);
};

const setCache = (key, value, ttlInSeconds = 60, cachePath) => {
  try {
    let cache = {};
    const fileName = getFileName(cachePath || key);
    if (fs.existsSync(fileName)) {
      cache = JSON.parse(fs.readFileSync(fileName));
    }
    cache[key] = {
      value,
      ttl: Date.now() + ttlInSeconds * 1000,
    };
    fs.writeFileSync(fileName, JSON.stringify(cache, null, 2));
  } catch (error) {
    console.log(error);
  }
};

const getCache = (key, cachePath) => {
  try {
    const fileName = getFileName(cachePath || key);
    if (!fs.existsSync(fileName)) {
      return null;
    }
    const cache = JSON.parse(fs.readFileSync(fileName));
    if (!cache[key]) {
      return null;
    }
    if (cache[key].ttl < Date.now()) {
      delete cache[key];
      fs.writeFileSync(fileName, JSON.stringify(cache, null, 2));
      return null;
    }
    return cache[key].value;
  } catch (error) {
    return null;
  }
};

const clearCache = (key, cachePath) => {
  try {
    const fileName = getFileName(cachePath || key);
    const cache = JSON.parse(fs.readFileSync(fileName));
    delete cache[key];
    fs.writeFileSync(fileName, JSON.stringify(cache, null, 2));
  } catch (error) {}
};

module.exports = {
  setCache,
  getCache,
  clearCache,
};

I stress tested the API using Postman and I'm satisfied with the results.

My question is: is this a good approach?

Note: I don’t want to use Redis at this moment.


How can I build a ClickUp/Jira-style text editor (e.g. with Lexical) that dynamically shows and hides heading objects (H1, H2, …) from JSON?


Use the toolbar to adjust your text formatting:

Turn into: Convert the selected text into headings, banners, a code block, or a quote block.


Rich text: Bold, italics, underline, strikethrough, or inline code formatting.

Text colors and Text highlights: Select from a range of vibrant text colors.


Badges: Insert a colorful badge to emphasize or call attention to a line or block of content. You can also add some rich text formatting to the text in the badge like you can in a banner.

Alignment: Indent and set text to left, center, or right justified.

Bulleted List: Format text into a bulleted list.

All Lists: Click the caret icon next to the Bulleted List icon to format text into a Numbered List or Toggled List.

Check List: Format text into a check list.

Insert a link: Insert a hyperlink.

Create subpage: Create a series of subtopics that are part of the main Doc.

Create comment: Add comments about the Doc to the right sidebar of the Doc. Text that has comments is highlighted.

Undo: Undo your last action.

Redo: Redo your last action.


/Slash Commands
Use /Slash Commands, our custom shortcuts that quickly add rich text, attach images, move a task, change a due date, and more!
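As for the show/hide part of the question: Lexical serializes its editor state to JSON, where heading nodes carry a `type` of `"heading"` and a `tag` (`h1`, `h2`, …). A minimal sketch of filtering such a state to hide chosen heading levels follows; the sample state and the `hideHeadings` helper are illustrative, not from the playground package.

```javascript
// Toggle heading visibility in a Lexical-style serialized editor state.
// The shape below mirrors Lexical's JSON (type/tag fields); the sample
// content is made up for illustration.
const editorState = {
  root: {
    type: "root",
    children: [
      { type: "heading", tag: "h1", text: "Title" },
      { type: "paragraph", text: "Intro text" },
      { type: "heading", tag: "h2", text: "Section" },
      { type: "paragraph", text: "Body text" },
    ],
  },
};

// Return a copy of the state with the given heading tags removed,
// which the editor can then re-render.
function hideHeadings(state, hiddenTags) {
  return {
    root: {
      ...state.root,
      children: state.root.children.filter(
        (node) => !(node.type === "heading" && hiddenTags.includes(node.tag))
      ),
    },
  };
}

const withoutH2 = hideHeadings(editorState, ["h2"]);
```

The filtered state can be loaded back into the editor, while the original JSON is kept around so hidden headings can be restored.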




https://www.npmjs.com/package/@drcpythonmfe/lexical-playground?activeTab=dependents
