How to upload and display a blob image as an object URL?

Using Node.js and Multer, I am uploading an image to the database as a blob. The upload is sent from a JavaScript XMLHttpRequest (AJAX) to an Express.js endpoint.

index.html: image upload request

var data, xhr;
data = new FormData();
data.append('imageProfile', image);
xhr = new XMLHttpRequest();

xhr.open('POST', 'http://localhost:3000/upload', true);
xhr.onreadystatechange = function (response) {
  //  document.getElementById("result").innerHTML = xhr.responseText
};
xhr.send(data);

Express.js: image upload to the database

router.post('/upload', upload.single('imageProfile'), function (req, res) {
  const imageProfile = req.file;
  var image = imageProfile;
  var sql = 'Insert into Uploads (id,image) VALUES("2",cast("' + image + '" AS BINARY));';
  connection.query(sql, function (err, data) {
    if (err) {
      // some error occurred
      console.log("database error-----------" + err);
    } else {
      // successfully inserted into db
      console.log("database insert successful-----------");
    }
  });
});

So, as far as I can tell, the image gets uploaded to the MySQL database as a blob successfully.

Now the problem is fetching the image from the database and viewing it.

Express.js: fetch image from the database

router.get('/getimage', function (req, res) {

  var sql = 'SELECT image FROM Uploads';

  connection.query(sql, function (err, data) {
    if (err) {
      // some error occurred
      console.log("database error-----------" + err);
    } else {
      // successfully selected from db
      console.log("database select successful-----------" + data.length);
      res.json({ 'image': data[0].image });
    }
  });
});

index.html: display the image from the Express.js endpoint as an object URL

$.get("http://localhost:3000/getimage",function(data,status){

        console.log("data---"+JSON.stringify(data));
        let url = URL.createObjectURL( new Blob([data["image"]], { type: 'image/jpeg' 
   }))
        imgtag.src=url;

    });

JSON response of the image blob

{"image":{"type":"Buffer","data": 
[91,111,98,106,101,99,116,32,79,98,106,101,99,116,93]}}

The image is not getting displayed as an object URL in index.html. Is anything wrong with what I've done here?
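For reference, on the client side I would expect to have to convert the serialized Buffer back into bytes before creating the Blob, roughly like this (a sketch of what I understand should work; I have not verified it):

$.get("http://localhost:3000/getimage", function (data) {
  // data.image is a serialized Node.js Buffer: { type: 'Buffer', data: [ ...bytes ] }
  const bytes = new Uint8Array(data.image.data);
  const blob = new Blob([bytes], { type: 'image/jpeg' });
  imgtag.src = URL.createObjectURL(blob);
});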

The issue I am facing is label and node overlap, especially when I make a loopback (self-connection) to the same node

I am using jsPlumb v2.1.2 (Community Edition) along with farahey.js for flow layout in my project.
Since the project is already live, I cannot change the version.

The issue I am facing is:
When I create a loopback connection (connect a node to itself), the labels and nodes overlap.
I want to make sure that there is no overlap between labels and nodes, especially in self-connections.
I need a fix: the self-connection should be displayed clearly, without any clutter.

Thanks in advance.

(Screenshots: self-connection nodes; label overlap issue.)

WasmBackendModuleThreadedSimd Error When Using Webpack-Bundled SDK with TensorFlow.js and @vladmandic/human

I’ve created a JavaScript SDK that uses the @vladmandic/human library and TensorFlow.js for ML operations. I bundle the SDK using Webpack v5.94.

Everything works fine when I use the source code directly in my Next.js app. However, when I install the SDK from npm (i.e., the bundled output), I encounter the following error at runtime:

Uncaught ReferenceError: n is not defined at WasmBackendModuleThreadedSimd at self.onmessage

I’m using only one WASM file from MediaPipe via a CDN link, and my Webpack config looks like this:

import path from 'path';
import { fileURLToPath } from 'url';

const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

export default {
  experiments: {
    outputModule: true
  },
  entry: './abc.js',
  output: {
    filename: 'abc.js',
    path: path.resolve(__dirname, 'dist'),
    library: {
      type: 'module'
    },
  },
  module: {
    rules: [
      {
        test: /\.jsx?$/,
        exclude: /node_modules/,
        use: {
          loader: 'babel-loader',
          options: {
            presets: ['@babel/preset-env', '@babel/preset-react'],
          },
        },
      },
      {
        test: /\.css$/,
        use: [
          'style-loader',
          'css-loader'
        ],
      },
    ],
  },
  resolve: {
    extensions: ['.js', '.jsx'],
  },
  mode: 'production',
};

What I’ve tried:

  • Using only source code directly – works fine.

  • Using the dist/abc.js bundle from npm – throws the error.

  • Ensured the WASM file is correctly fetched from the CDN.
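One direction I'm considering (not sure it is the right fix, and the package names to externalize are a guess) is not bundling the ML libraries at all and letting the consuming app resolve them, by marking them as externals in the Webpack config:

// webpack.config.js (excerpt): hypothetical change, not yet verified
export default {
  // ...existing options from above...
  externalsType: 'module',
  externals: {
    '@vladmandic/human': '@vladmandic/human',
    '@tensorflow/tfjs': '@tensorflow/tfjs',
  },
};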

How can I bundle my SDK with @vladmandic/human and TensorFlow.js using Webpack so that it works properly? Thanks in advance!

String.prototype.replaceAll() replaces twice

  parseAgentFunction(prompt: any, mindernode: any): string {
    prompt = '这个是啥$querySeedTextCaseFromDb()'
    if (prompt.includes("$querySeedTextCaseFromDb")) {
      const regex = /$querySeedTextCaseFromDb((.*?))/g;
      const checkedNodesForQuerySeedTextCaseFromDb = '$querySeedTextCaseFromDb(测试老的关联方式有没有坏,background占位这种)'
      prompt = prompt.replaceAll(
        regex,
        checkedNodesForQuerySeedTextCaseFromDb
      );
    }
    return prompt;
  }

The result is '这个是啥$querySeedTextCaseFromDb(测试老的关联方式有没有坏,background占位这种)$querySeedTextCaseFromDb(测试老的关联方式有没有坏,background占位这种)', which is supposed to be '这个是啥$querySeedTextCaseFromDb(测试老的关联方式有没有坏,background占位这种)'

When I change replaceAll to replace, the result becomes '这个是啥$querySeedTextCaseFromDb(测试老的关联方式有没有坏,background占位这种)', which is what I expected.

function parseAgentFunction(prompt, mindernode) {
  prompt = '这个是啥$querySeedTextCaseFromDb()'
  if (prompt.includes("$querySeedTextCaseFromDb")) {
    const regex = /$querySeedTextCaseFromDb((.*?))/g;
    const checkedNodesForQuerySeedTextCaseFromDb = '$querySeedTextCaseFromDb(测试老的关联方式有没有坏,background占位这种)'
    prompt = prompt.replaceAll(
      regex,
      checkedNodesForQuerySeedTextCaseFromDb
    );
  }
  return prompt;
}

console.log(parseAgentFunction());
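For reference, since $, ( and ) are regex metacharacters, an escaped version of the pattern (shown only for comparison; the code above uses the unescaped form) would be:

const regex = /\$querySeedTextCaseFromDb\((.*?)\)/g;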

Why are fallthrough attributes applied to both the root element and the one you specify with v-bind="$attrs"?

I’ve been trying to use Vue’s Fallthrough Attributes to pass an event listener down to a button with v-bind="$attrs". After hours of debugging I found out that the event listener is registered twice: on the root element and on the element bound with v-bind="$attrs". Here’s the minimal repro:

<!-- App.vue -->
<script setup lang="ts">
import Comp from './Comp.vue'
</script>

<template>
  <Comp @click="console.log('click')" />
</template>

<!-- Comp.vue -->
<script setup lang="ts"></script>

<template>
  <div style="padding:20px;">
    triggers event once
    <button v-bind="$attrs">triggers event twice</button>
  </div>
</template>

I’ve also made a more in-depth demo on Vue Playground.

Apparently, Vue passes the fallthrough attributes down both to the root element of the component and to the element specified with v-bind="$attrs". So if you pass an id, a Tailwind class, anything, it will be duplicated.
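For completeness, the only workaround I've found so far is opting the child component out of automatic attribute inheritance (a sketch assuming Vue 3.3+ for defineOptions; older versions would need a separate non-setup <script> block with inheritAttrs: false):

<!-- Comp.vue, with fallthrough inheritance disabled -->
<script setup lang="ts">
defineOptions({ inheritAttrs: false })
</script>

<template>
  <div style="padding:20px;">
    no longer receives the fallthrough listener
    <button v-bind="$attrs">triggers event once</button>
  </div>
</template>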

Is there any reason for this behavior? From my couple dozen hours of experience with Vue it seems like a bug or a really annoying feature.

It works exactly like I’d expect with multi-root templates, so why not single-root?

How to reproduce potential race condition?

I have a crawler that calls an HTTP endpoint every 5 seconds. The endpoint runs the code below:

const result = Array.from(this.registry)
  .map(([, metric]) => metric.format())
  .join('\n');
this.registry = new Map();
return result;

And somewhere else in my app, I'm adding to the Map. I think that between reading/mapping the Map and replacing it, items could be added to it, so after doing new Map() those items would be gone forever!
I'm trying to write code that reproduces this problem (without touching the controller, of course).

I’ve tried setting an interval every 1 ms that adds to the Map, along with another app calling the API every 5 seconds. No matter what I do, I can't reproduce it.
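Roughly, my reproduction attempt looks like this (simplified; the names are placeholders, and registry.clear() stands in for reassigning this.registry = new Map()):

const registry = new Map();
let counter = 0;

// Writer: add a metric roughly every millisecond
setInterval(() => {
  const id = counter++;
  registry.set(`metric_${id}`, { format: () => `value ${id}` });
}, 1);

// Reader: simulate the endpoint flushing the registry every 5 seconds
setInterval(() => {
  const result = Array.from(registry)
    .map(([, metric]) => metric.format())
    .join('\n');
  registry.clear();
  console.log('flushed', result ? result.split('\n').length : 0, 'metrics');
}, 5000);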

How can I dynamically calculate golf handicap in JavaScript for a WordPress plugin?

I’m building a front-end golf handicap calculator for my WordPress site using JavaScript. The goal is to let users input their scores, course rating, and slope rating, and then calculate their handicap index based on the USGA formula.

I understand the basic formula for handicap differential:

Handicap Differential = (Adjusted Gross Score – Course Rating) × 113 / Slope Rating

I’ve written this basic function in JavaScript to calculate the differentials:

function calculateHandicap(scores) {
    const differentials = scores.map(score => {
        return ((score.gross - score.rating) * 113) / score.slope;
    });
    // Not sure how to select the best N differentials
}

What I’m struggling with:

  • How to correctly sort the differentials and pick the lowest N values (e.g., lowest 8 out of 20)?

  • Should each differential be rounded to one decimal before or after averaging?

  • Is there a clean way to implement this in vanilla JS that would work well with a WordPress shortcode or embedded form?
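The direction I'm leaning for the selection and averaging step is sketched below (a hypothetical calculateHandicapIndex, assuming the current World Handicap System rule of averaging the lowest 8 of the last 20 differentials and rounding the final index to one decimal; I'm not certain that's the exact rounding the USGA wants, which is part of my question):

function calculateHandicapIndex(scores) {
    // One differential per round: (Adjusted Gross Score - Course Rating) * 113 / Slope Rating
    const differentials = scores.map(
        (score) => ((score.gross - score.rating) * 113) / score.slope
    );

    // Sort ascending and keep the lowest 8 of the (up to 20) submitted rounds
    const best = differentials.sort((a, b) => a - b).slice(0, 8);

    // Average the best differentials, then round the index to one decimal place
    const average = best.reduce((sum, d) => sum + d, 0) / best.length;
    return Math.round(average * 10) / 10;
}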

I expected to get a working handicap index displayed after the user submits the form, but I’m unsure about the data processing logic and best practices for client-side integration. Any help with the calculation logic or WordPress integration is appreciated!

Typescript error because nested value might be null even after I check it’s not null

I am having a problem with a Drizzle response and TypeScript.
Drizzle joins return an object with a nested `object | null` field, and I couldn’t find a TypeScript-only way to resolve it without extra, unnecessary steps that change the code just to make TypeScript happy.

The following code simulates the type issue:

interface Author {
    id: string;
}

interface Post {
    id: string;
    author: Author;
}

interface PostWithNull {
    id: string;
    author: Author | null;
}

const mixedData = [
    {
        id: '123',
        author: null,
    },
    {
        id: '234',
        author: {
            id: '1'
        },
    }
];

function getPostById(id: string): Post | null {
    // Simulate Drizzle response type -> DO NOT CHANGE
    const res = mixedData.find((record) => record.id === id) as PostWithNull;
    if (!res) {
        return null;
    }

    // The problematic part
    if (res.author) {
        return res;
        // The code below will work.
        // return {
        //     ...res,
        //     author: res.author
        // }
    }


    return null;
}

Removing the “as” would resolve the issue, but that’s not the point; it is only there to simulate the response I am getting, which I have no control over.
I can copy the object like I did there (commented out), but it would be an extra step done purely for the TypeScript compiler, which I’d rather avoid.
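A user-defined type guard is another option I considered (sketch below), but it still adds a runtime helper whose only purpose is to satisfy the compiler:

// Hypothetical helper: narrows PostWithNull to Post when author is present
function hasAuthor(post: PostWithNull): post is Post {
    return post.author !== null;
}

// Inside getPostById, `if (hasAuthor(res)) return res;` then type-checks.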

I am looking for a Typescript solution to this problem.

Speech-to-text not working on iOS devices

I don’t have much experience working with JavaScript.
I have a task in which I have to perform both STT and TTS. I’m using the JavaScript SpeechSynthesisUtterance API for speech; this works well when testing on an Android device, but speech-to-text conversion fails when I use it on an iOS device.

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Voice Command</title>
    <style>
        .chat-container {
            max-width: 400px;
            margin: 20px auto;
            padding: 10px;
            border: 1px solid #ccc;
            border-radius: 5px;
            box-shadow: 0 0 10px rgba(0, 0, 0, 0.1);
            font-family: Arial, sans-serif;
        }
        .user-message {
            background-color: #f0f0f0;
            border-radius: 5px;
            padding: 5px 10px;
            margin: 5px 0;
            text-align: right;
        }
        .bot-message {
            background-color: #d3e9ff;
            border-radius: 5px;
            padding: 5px 10px;
            margin: 5px 0;
        }
        #languageSelector {
            width: 100%;
            margin-top: 10px;
            padding: 5px;
            border-radius: 5px;
            border: 1px solid #ccc;
        }
        #status {
            color: grey;
            font-weight: 600;
            margin-top: 10px;
            text-align: center;
        }
        #permissionModal {
            position: fixed;
            top: 0;
            left: 0;
            width: 100%;
            height: 100%;
            background: rgba(0, 0, 0, 0.5);
            display: flex;
            justify-content: center;
            align-items: center;
            z-index: 1000;
        }
        #permissionModal div {
            background: white;
            padding: 20px;
            border-radius: 5px;
            text-align: center;
            box-shadow: 0 0 10px rgba(0, 0, 0, 0.3);
        }
        #permissionModal button {
            margin: 10px;
            padding: 10px 20px;
            border: none;
            border-radius: 5px;
            background: #007bff;
            color: white;
            cursor: pointer;
        }
        #permissionModal button:hover {
            background: #0056b3;
        }
        #testSpeakerButton {
            display: block;
            margin: 10px auto;
            padding: 10px 20px;
            border: none;
            border-radius: 5px;
            background: #28a745;
            color: white;
            cursor: pointer;
            font-family: Arial, sans-serif;
            font-weight: 600;
        }
        #testSpeakerButton:hover {
            background: #218838;
        }
    </style>
</head>
<body>
    <div id="permissionModal" style="display: none;">
        <div>
            <p>This site requires microphone and speaker permissions to enable voice input and output.</p>
            <button id="grantPermissions">Grant Permissions</button>
        </div>
    </div>
    <button id="testSpeakerButton">Speaker test</button>
    <div class="chat-container">
        <div id="chat-box"></div>
        <select id="languageSelector">
            <option value="English (US)">English (US)</option>
            <option value="Hindi (India)">Hindi (India)</option>
            <option value="Spanish (Spain)">Spanish (Spain)</option>
            <option value="French (France)">French (France)</option>
            <option value="German (Germany)">German (Germany)</option>
            <option value="Arabic (Saudi Arabia)">Arabic (Saudi Arabia)</option>
        </select>
        <div class="speaker" style="display: flex; justify-content: space-between; width: 100%; box-shadow: 0 0 13px #0000003d; border-radius: 5px; margin-top: 10px;">
            <p id="action" style="color: grey; font-weight: 800; padding: 0; padding-left: 2rem;"></p>
            <button id="speech" style="border: transparent; padding: 0 0.5rem;">
                Tap to Speak
            </button>
        </div>
        <p id="status"></p>
    </div>

    <script>
        // Browser detection
        function detectBrowser() {
            const ua = navigator.userAgent.toLowerCase();
            if (ua.includes('safari') && !ua.includes('chrome')) return 'Safari';
            if (ua.includes('firefox')) return 'Firefox';
            if (ua.includes('edg')) return 'Edge';
            if (ua.includes('chrome')) return 'Chrome';
            return 'Unknown';
        }

        const browser = detectBrowser();
        const statusBar = document.getElementById('status');
        const permissionModal = document.getElementById('permissionModal');
        const grantPermissionsButton = document.getElementById('grantPermissions');
        const testSpeakerButton = document.getElementById('testSpeakerButton');

        // Language mapping
        const speechLangMap = {
            'English (US)': 'en-US',
            'Hindi (India)': 'hi-IN',
            'Spanish (Spain)': 'es-ES',
            'French (France)': 'fr-FR',
            'German (Germany)': 'de-DE',
            'Arabic (Saudi Arabia)': 'ar-SA'
        };

        // Initialize speech synthesis
        const synth = window.speechSynthesis || null;
        let voices = [];

        // Load voices asynchronously
        function loadVoices() {
            return new Promise((resolve) => {
                if (!synth) {
                    statusBar.textContent = 'Text-to-speech not supported in this browser.';
                    resolve([]);
                    return;
                }
                voices = synth.getVoices();
                if (voices.length > 0) {
                    resolve(voices);
                } else {
                    synth.addEventListener('voiceschanged', () => {
                        voices = synth.getVoices();
                        resolve(voices);
                    }, { once: true });
                }
            });
        }

        // Speak text with fallback
        async function speakResponse(text, language) {
            if (!synth) {
                statusBar.textContent = 'Text-to-speech is unavailable. Displaying text only.';
                showBotMessage(text);
                return;
            }

            const langCode = speechLangMap[language] || 'en-US';
            await loadVoices();

            const utterance = new SpeechSynthesisUtterance(text);
            let selectedVoice = voices.find(voice => voice.lang === langCode);
            if (!selectedVoice) {
                console.warn(`No voice for ${langCode}. Falling back to English.`);
                selectedVoice = voices.find(voice => voice.lang.startsWith('en')) || voices[0];
                statusBar.textContent = `Voice for ${language} unavailable. Using English voice.`;
            }

            if (selectedVoice) {
                utterance.voice = selectedVoice;
                utterance.lang = selectedVoice.lang;
            } else {
                statusBar.textContent = 'No voices available for text-to-speech.';
                showBotMessage(text);
                return;
            }

            utterance.volume = 1.0;
            utterance.rate = 1.0;
            utterance.pitch = 1.0;
            utterance.onerror = (event) => {
                console.error('TTS error:', event.error);
                statusBar.textContent = 'Error in text-to-speech. Displaying text only.';
                showBotMessage(text);
            };

            utterance.onend = () => {
                console.log('TTS finished.');
                statusBar.textContent = '';
            };

            if (synth.speaking || synth.paused) {
                synth.cancel();
            }

            try {
                synth.speak(utterance);
            } catch (error) {
                console.error('TTS failed:', error);
                statusBar.textContent = 'Failed to play speech. Displaying text only.';
                showBotMessage(text);
            }

            document.getElementById('speech').addEventListener('click', () => {
                if (synth.speaking) synth.cancel();
            }, { once: true });
        }

        // Initialize TTS and test speaker
        async function testSpeaker() {
            if (!synth) {
                statusBar.textContent = 'Text-to-speech not supported in this browser.';
                return false;
            }

            try {
                await loadVoices();
                // Silent utterance to unlock audio in Safari
                const silentUtterance = new SpeechSynthesisUtterance('');
                silentUtterance.volume = 0;
                silentUtterance.onend = () => synth.cancel();
                silentUtterance.onerror = (event) => {
                    console.error('Silent TTS error:', event.error);
                    synth.cancel();
                };
                synth.speak(silentUtterance);
                await new Promise(resolve => setTimeout(resolve, 100));
                synth.cancel();

                // Test utterance
                const selectedLang = document.getElementById('languageSelector').value;
                const langCode = speechLangMap[selectedLang] || 'en-US';
                const utterance = new SpeechSynthesisUtterance('Speaker works fine');
                let selectedVoice = voices.find(voice => voice.lang === langCode);
                if (!selectedVoice) {
                    selectedVoice = voices.find(voice => voice.lang.startsWith('en')) || voices[0];
                    statusBar.textContent = `Voice for ${selectedLang} unavailable. Using English voice.`;
                }

                if (selectedVoice) {
                    utterance.voice = selectedVoice;
                    utterance.lang = selectedVoice.lang;
                } else {
                    statusBar.textContent = 'No voices available for text-to-speech.';
                    return false;
                }

                utterance.volume = 1.0;
                utterance.rate = 1.0;
                utterance.pitch = 1.0;
                utterance.onerror = (event) => {
                    console.error('Test TTS error:', event.error);
                    statusBar.textContent = 'Error testing speaker.';
                };
                utterance.onend = () => {
                    console.log('Test TTS finished.');
                    statusBar.textContent = 'Speaker test successful.';
                };

                synth.speak(utterance);
                return true;
            } catch (error) {
                console.error('TTS test failed:', error);
                statusBar.textContent = 'Failed to test speaker.';
                return false;
            }
        }

        // Request microphone permission
        async function requestMicPermission() {
            try {
                const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
                stream.getTracks().forEach(track => track.stop());
                return true;
            } catch (error) {
                console.error('Microphone permission error:', error);
                statusBar.textContent = 'Microphone access denied. Voice input unavailable.';
                return false;
            }
        }

        // Check and request all permissions
        async function checkAndRequestPermissions() {
            if (browser !== 'Safari') return true;

            try {
                const permissionStatus = await navigator.permissions.query({ name: 'microphone' });
                if (permissionStatus.state === 'granted') {
                    return await testSpeaker(); // Test speaker if mic is granted
                }
            } catch (error) {
                console.warn('Permission query not supported:', error);
            }

            permissionModal.style.display = 'flex';
            return new Promise((resolve) => {
                grantPermissionsButton.onclick = async () => {
                    permissionModal.style.display = 'none';
                    const micGranted = await requestMicPermission();
                    const ttsReady = micGranted ? await testSpeaker() : false;
                    if (!micGranted || !ttsReady) {
                        statusBar.textContent = 'Some permissions were not granted. Features may be limited.';
                    }
                    resolve(micGranted && ttsReady);
                };
            });
        }

        // Speech recognition
        function runSpeechRecog() {
            const selectedLang = document.getElementById('languageSelector').value;
            const action = document.getElementById('action');

            if (!window.SpeechRecognition && !window.webkitSpeechRecognition) {
                statusBar.textContent = 'Speech recognition not supported in this browser.';
                return;
            }

            const recognition = new (window.SpeechRecognition || window.webkitSpeechRecognition)();
            recognition.lang = speechLangMap[selectedLang] || 'en-US';
            recognition.interimResults = false;
            recognition.continuous = false;

            recognition.onstart = () => {
                action.textContent = 'Listening...';
                statusBar.textContent = '';
            };

            recognition.onresult = (event) => {
                const transcript = event.results[0][0].transcript;
                action.textContent = '';
                sendMessage(transcript, selectedLang);
            };

            recognition.onerror = (event) => {
                action.textContent = '';
                statusBar.textContent = `Speech recognition error: ${event.error}`;
            };

            recognition.onend = () => {
                action.textContent = '';
            };

            try {
                recognition.start();
            } catch (error) {
                statusBar.textContent = 'Failed to start speech recognition.';
                console.error('STT error:', error);
            }
        }

        // Send message to Flask API
        async function sendMessage(message, language) {
            showUserMessage(message);
            try {
                const response = await fetch('/api/process_text', {
                    method: 'POST',
                    headers: { 'Content-Type': 'application/json' },
                    body: JSON.stringify({ text: message, language })
                });
                const data = await response.json();
                console.log('Response from Flask API:', data);
                handleResponse(data);
            } catch (error) {
                console.error('Error sending data to Flask API:', error);
                statusBar.textContent = 'Error: Unable to process request';
                showBotMessage('Error: Unable to process request');
            }
        }

        // Handle API response
        function handleResponse(data) {
            if (data.error) {
                statusBar.textContent = data.error;
                showBotMessage(data.error);
                return;
            }
            showBotMessage(data.response);
            speakResponse(data.response, data.language);
        }

        // Show user message
        function showUserMessage(message) {
            const chatBox = document.getElementById('chat-box');
            chatBox.innerHTML += `<div class="user-message">${message}</div>`;
            chatBox.scrollTop = chatBox.scrollHeight;
        }

        // Show bot message
        function showBotMessage(message) {
            const chatBox = document.getElementById('chat-box');
            chatBox.innerHTML += `<div class="bot-message">${message}</div>`;
            chatBox.scrollTop = chatBox.scrollHeight;
        }

        // Initialize
        window.addEventListener('load', async () => {
            await loadVoices();
            if (browser === 'Safari') {
                statusBar.textContent = 'Safari detected. Please grant microphone and speaker permissions.';
                const permissionsGranted = await checkAndRequestPermissions();
                if (!permissionsGranted) {
                    statusBar.textContent = 'Permissions denied. Some features may not work.';
                }
            }
            document.getElementById('speech').addEventListener('click', runSpeechRecog);
            testSpeakerButton.addEventListener('click', testSpeaker);
        });

        // Clean up on unload
        window.addEventListener('beforeunload', () => {
            if (synth && synth.speaking) synth.cancel();
        });
    </script>
</body>
</html>

localStorage disappears after refresh. The array resets [duplicate]

I was following along with a React project tutorial and writing the exact same code down to the last bit, but noticed that although the favorited movies were saved to localStorage, the array would reset upon refresh. Then I noticed a message, ‘MovieContext.jsx (the file with the localStorage code) is not being reloaded’, just before the logs of favorited movies being saved to localStorage before a refresh.

import { createContext, useState, useContext, useEffect } from "react";

const MovieContext = createContext()

export const useMovieContext = () => useContext(MovieContext);

export const MovieProvider = ({children}) => {
  const [favorites, setFavorites] = useState([]);

  useEffect(() => {
    console.log('useEffect on mount runs');
    const storedFavs = localStorage.getItem("favorites");
    console.log('storedFavs', storedFavs);

    if (storedFavs) {
      try {
        setFavorites(JSON.parse(storedFavs));
      } catch (err) {
        console.error('Failed to load from localStorage...', err);
      }
    }
  }, []);

  useEffect(() => {
    try {
      localStorage.setItem("favorites", JSON.stringify(favorites));
      console.log('Saving to localStorage', favorites);
    } catch (error) {
      console.error('Failed to save to localStorage', error);
      
    }
  }, [favorites]);

  const addToFavorites = (movie) => {
    setFavorites(prev => [...prev, movie]);
  }

  const removeFromFavorites = (movieId) => {
    setFavorites(prev => prev.filter(movie => movie.id !== movieId));
  }

  const isFavorite = (movieId) => {
    return favorites.some(movie => movie.id === movieId);
  }

  const value = {
    favorites,
    addToFavorites,
    removeFromFavorites,
    isFavorite
  }

  return <MovieContext.Provider value={value}>
    {children}
  </MovieContext.Provider>
}

The App.jsx had the MovieProvider wrapper:

import "./css/App.css";
import Favorites from "./pages/Favorites";
import Home from "./pages/Home";
import { Routes, Route } from "react-router-dom";
import { MovieProvider } from "./contexts/MovieContext";
import NavBar from "./components/NavBar";

function App() {
  return (
    <MovieProvider>
      <NavBar />
      <main className="main-content">
        <Routes>
          <Route path="/" element={<Home />} />
          <Route path="/favorites" element={<Favorites />} />
        </Routes>
      </main>
    </MovieProvider>
  );
}

export default App;
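One thing I considered (not sure whether it addresses the reload message) is initializing the state lazily from localStorage, so the saving effect never runs with the empty initial array:

// Hypothetical change in MovieContext.jsx: read localStorage in a lazy initializer
const [favorites, setFavorites] = useState(() => {
  try {
    return JSON.parse(localStorage.getItem("favorites")) ?? [];
  } catch {
    return [];
  }
});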

Next.js App Crashes with SIGINT Error Using Turbopack and PNPM

When developing a Next.js application with GraphQL using Turbopack on an M4 MacBook (24GB RAM), the application starts correctly but exhibits the following behavior:

  1. After 15-20 minutes of runtime, subsequent page loads fail to complete

  2. Network requests (particularly .json requests) remain pending indefinitely

  3. The server eventually exits with SIGINT

Console output: "http server closed - sigint" (network log screenshot omitted).

I tried increasing the allocated TypeScript memory to 7 GB, but it still didn’t work.

How to extract OTP codes from temporary emails automatically for testing purposes?

I’m writing automated tests for registration forms that require email-based OTP verification.

Using real mailboxes (like Gmail or Outlook) is slow, hard to reset, and sometimes blocked. I need a way to receive one-time codes quickly and programmatically, without having to scrape full emails or deal with login processes.

Ideally, the service should support receiving verification codes from real platforms like Telegram, Discord, or visa appointment systems.

I tried using some common disposable email services like 10minutemail and TempMail, but most of them either:

  • Don’t offer an API,
  • Or return full HTML that I have to parse manually,
  • Or are blocked by many platforms.

I’m looking for a solution where I can just fetch the OTP or verification link in plain JSON via API, ideally filtered out from the message body.
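For context, when a service only returns the raw message body, I currently end up writing something like this (a sketch; fetchLatestMessage is a placeholder for whatever the service's API exposes):

// Hypothetical helper: pull a 6-digit OTP out of a raw message body
async function getOtp(address) {
  const message = await fetchLatestMessage(address); // placeholder for the service API
  const match = message.body.match(/\b\d{6}\b/);
  return match ? match[0] : null;
}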

Expected result:

{ "otp": "123456" }

Disable and enable button in oracle apex

I have a button that uses a Dynamic Action (DA) to execute server-side code. When the network is slow, the user clicks the button multiple times, which causes duplicate entries. I want to disable the button when the user clicks it and re-enable it after the process completes. I have achieved this, but when the PL/SQL code raises raise_application_error, the disabled button is not re-enabled. I tried the following; let me know how to achieve this.

Event: Click
Selection Type: Button
Button: ADD

True Action: Execute JavaScript Code
$(this.triggeringElement).prop('disabled', true);
$(this.triggeringElement).addClass('is-disabled');

True Action: Execute server side Code (which raises the error)

For this created another DA

Event: Custom
Custom Event: apexerror
Selection Type: JavaScript Expression

JavaScript Expression:
$('button.is-disabled').prop('disabled', false).removeClass('is-disabled');


How can I make the gaps not be covered by the gradient? Or is there another way to achieve this result?

I’m trying to apply a single background gradient to a grid of cards (statistic blocks) in React using Tailwind CSS. The goal is to have the gradient visible only on the cards, while the gaps between them remain transparent.

What I need: (screenshot: the gradient visible only on the cards, with the gaps transparent)

What I have: (screenshot: the gradient covering the whole grid, including the gaps)

My code

<section className="container-base m-section">
      <Title title="By the numbers" />
      <div className="relative grid grid-cols-5 grid-rows-2 gap-4 h-[500px] mt-8 bg:[#181413] before:absolute before:inset-0 before:bg-gradient-to-b before:bg-[linear-gradient(135deg,_#ffdd55,_#ff55cc,_#88f,_#55ffff)] before:rounded-xl">
        {statisticData.map((item, index) => (
          <StatisticBlock
            className="z-10 "
            key={index}
            title={item.title}
            subtitle={item.subtitle}
            col={item.col}
            row={item.row}
          />
        ))}
      </div>
    </section>

StatisticBlock:

<div
      className={clsx(
        className,
        "flex flex-col items-center justify-center p-4 rounded-xl text-black text-center",
        col === 2 && "col-span-2",
        col === 3 && "col-span-3",
        row === 2 && "row-span-2",
        row === 3 && "row-span-3"
      )}
    >
      <div className="text-6xl leading-20 font-bold">{title}</div>
      <p className="text-2xl leading-6 font-bold">{subtitle}</p>
    </div>