Why isn’t i18next-fs-backend loading translation files from Onedrive?

I’m trying to load i18next translation files from a directory in OneDrive. When I do, Playwright keeps looking for the files on the C: drive. Is it impossible to load files from OneDrive? Can anyone tell me what I am doing wrong?

See my code below:

const localePath = path.resolve('./locales');
// resolves correctly to C:\Users\myuserpath\OneDrive - MyStuff\QA-Automation\locales

i18next
    .use(FsBackend)
        .init({
        lng: 'en',
        ns:['signin.page', 'chat.page','session.page', 'landing.page', 'userprofile.page'],   
        FsBackend:{ 
           FsBackendOptions: {loadPath: path.join(localePath, '/{{lng}}/{{ns}}.page.json'),},
        }
    })

//Expected result:
//Translations should load from C:\Users\myuserpath\OneDrive - MyStuff\QA-Automation\locales\en\NAMESPACEFILE.page.json
//Actual result:
//[Error: ENOENT: no such file or directory, open 'C:\locales\en\NAMESPACEFILE.page.json']

Other things I have tried:

Using the OneDrive environment variable:

const myOneDrive = process.env.OneDriveCommercial;
const longerpath = path.join(myOneDrive, '/QA-Automation/locales/{{lng}}/{{ns}}.page.json');
i18next
    .use(FsBackend)
    .init({
     //blah blah
        FsBackend:{ FsBackendOptions: {loadPath: longerpath},},
    })

Hardcoding the full path:

const fullpath = 'C:\\Users\\myuserpath\\OneDrive - MyStuff\\QA-Automation\\locales'
    FsBackend:{ FsBackendOptions: {loadPath: fullpath},},

Hardcoding the full path in init:

FsBackendOptions: {loadPath: 'C:/Users/myuserpath/OneDrive - MyStuff/QA-Automation/locales/{{lng}}/{{ns}}.page.json'},

Swearing:

const dev_reaction = '$#!(#&*$@$!!!'

The result is always the same: it ignores OneDrive completely and tries to load from C:\locales\ and so on.
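
In case it helps narrow things down: per the i18next-fs-backend README, the backend options live under a backend key in init; FsBackend and FsBackendOptions are not option names the library reads, so the custom loadPath may never be picked up. The backend would then fall back to its default (something like /locales/{{lng}}/{{ns}}.json, which on Windows resolves against the drive root and would produce exactly a C:\locales\... error). A minimal sketch of that documented shape, assuming CommonJS:

const path = require('path');
const i18next = require('i18next');
const FsBackend = require('i18next-fs-backend');

const localePath = path.resolve('./locales');

i18next
    .use(FsBackend)
    .init({
        lng: 'en',
        ns: ['signin.page', 'chat.page', 'session.page', 'landing.page', 'userprofile.page'],
        backend: {
            // '{{ns}}.json' because the namespaces above already end in '.page';
            // '{{ns}}.page.json' would look for e.g. 'signin.page.page.json'
            loadPath: path.join(localePath, '{{lng}}', '{{ns}}.json'),
        },
    });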

Issue with TensorFlow.js Conversion – YOLOv8-Pose Not Detecting Hand & Wrist Keypoint

I’m looking for help with my machine learning model that detects my hand and one wrist keypoint. After training, the model correctly detects my hand with a bounding box and wrist keypoint in PyTorch. However, after converting the best.pt file to a TensorFlow.js model, the detection fails: it no longer detects my hand or the keypoint.

Model Details

YOLOv8 trained for pose detection
Custom dataset with hand images and wrist keypoint annotations
Input size: 224×224
The model works correctly in the PyTorch environment

Here is how I did my conversion:

import os
from ultralytics import YOLO
import shutil
import tensorflow as tf
from google.colab import files

def find_saved_model(base_path):
    """Find the SavedModel directory in the export path"""
    for root, dirs, files in os.walk(base_path):
        if 'saved_model.pb' in files:
            return root
    return None

def add_signatures(saved_model_dir):
    """Load the SavedModel and add required signatures"""
    print("Adding signatures to SavedModel...")

    # Load the model
    model = tf.saved_model.load(saved_model_dir)

    # Create a wrapper function that matches the model's interface
    @tf.function(input_signature=[
        tf.TensorSpec(shape=[1, 640, 640, 3], dtype=tf.float32, name='images')
    ])
    def serving_fn(images):
        # Call model directly without training parameter
        return model(images)

    # Convert the model
    concrete_func = serving_fn.get_concrete_function()

    # Create a new SavedModel with the signature
    tf.saved_model.save(
        model,
        saved_model_dir,
        signatures={
            'serving_default': concrete_func
        }
    )

    print("Signatures added successfully")
    return saved_model_dir

def convert_to_tfjs(pt_model_path, output_dir):
    """
    Convert a PyTorch YOLO model to TensorFlow.js format

    Args:
        pt_model_path (str): Path to the .pt file
        output_dir (str): Directory to save the converted model
    """
    try:
        # Ensure output directory exists
        os.makedirs(output_dir, exist_ok=True)

        # Load the model
        print(f"Loading YOLO model from {pt_model_path}...")
        model = YOLO(pt_model_path)

        # First export to TensorFlow format
        print("Exporting to TensorFlow format...")

        # Export the model
        success = model.export(
            format='saved_model',
            imgsz=672,
            half=False,
            simplify=True
        )

        # Find the SavedModel directory
        saved_model_dir = find_saved_model(os.path.join(os.getcwd(), "best_saved_model"))
        if not saved_model_dir:
            raise Exception(f"Cannot find SavedModel directory in {os.path.dirname(pt_model_path)}")

        print(f"Found SavedModel at: {saved_model_dir}")

        # Add signatures to the model
        saved_model_dir = add_signatures(saved_model_dir)

        # Convert to TensorFlow.js
        print("Converting to TensorFlow.js format...")
        tfjs_target_dir = os.path.join(output_dir, 'tfjs_model')

        # Ensure clean target directory
        if os.path.exists(tfjs_target_dir):
            shutil.rmtree(tfjs_target_dir)
        os.makedirs(tfjs_target_dir)

        # Try conversion with modified parameters
        conversion_command = (
            f"tensorflowjs_converter "
            f"--input_format=tf_saved_model "
            f"--output_format=tfjs_graph_model "
            f"--saved_model_tags=serve "
            f"--control_flow_v2=True "
            f"'{saved_model_dir}' "
            f"'{tfjs_target_dir}'"
        )

        print(f"Running conversion command: {conversion_command}")
        result = os.system(conversion_command)

        if result != 0:
            raise Exception("TensorFlow.js conversion failed")

        # Verify conversion
        if not os.path.exists(os.path.join(tfjs_target_dir, 'model.json')):
            raise Exception("TensorFlow.js conversion failed - model.json not found")

        print(f"Successfully converted model to TensorFlow.js format")
        print(f"Output saved to: {tfjs_target_dir}")

        # Print model files
        print("nConverted model files:")
        for file in os.listdir(tfjs_target_dir):
            print(f"- {file}")

        # Create a zip file of the converted model
        shutil.make_archive(tfjs_target_dir, 'zip', tfjs_target_dir)

        # Download the zip file
        files.download("converted_model/tfjs_model.zip")

    except Exception as e:
        print(f"Error during conversion: {str(e)}")
        print("nDebug information:")
        print(f"Current working directory: {os.getcwd()}")
        print(f"PT model exists: {os.path.exists(pt_model_path)}")
        if 'saved_model_dir' in locals():
            print(f"SavedModel directory exists: {os.path.exists(saved_model_dir)}")
            if os.path.exists(saved_model_dir):
                print("SavedModel contents:")
                for root, dirs, files in os.walk(saved_model_dir):
                    print(f"nDirectory: {root}")
                    for f in files:
                        print(f"  - {f}")
        raise



# Upload your .pt model file
from google.colab import files
uploaded = files.upload()

#Get the filename of the uploaded file
pt_model_path = next(iter(uploaded.keys()))
output_dir = "converted_model"

# Convert the model
convert_to_tfjs(pt_model_path, output_dir)
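
Before wiring the converted model into the browser app, a quick smoke test in Node may save time, especially since the sizes disagree above: the model details say 224x224, the export uses imgsz=672, yet the added signature declares [1, 640, 640, 3], and resolving that mismatch is probably step one. A sketch using tfjs-node (the model path is assumed from the script above):

// assumes: npm install @tensorflow/tfjs-node
const tf = require('@tensorflow/tfjs-node');

(async () => {
    const model = await tf.loadGraphModel(
        'file://./converted_model/tfjs_model/model.json');
    // feed a dummy input matching the declared signature
    const out = model.predict(tf.zeros([1, 640, 640, 3]));
    console.log(Array.isArray(out) ? out.map(t => t.shape) : out.shape);
})();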

Real-time Hand Pose Detection Web Application

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Real-time Hand Pose Detection</title>
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
    <style>
        body { 
            text-align: center; 
            font-family: Arial, sans-serif;
            margin: 0;
            padding: 20px;
            background: #f0f0f0;
        }
        .container {
            position: relative;
            width: 640px;
            height: 480px;
            margin: 20px auto;
        }
        video, canvas { 
            position: absolute;
            left: 0;
            top: 0;
        }
        button {
            margin: 10px;
            padding: 10px 20px;
            font-size: 16px;
            cursor: pointer;
            background: #007bff;
            color: white;
            border: none;
            border-radius: 4px;
        }
        button:hover {
            background: #0056b3;
        }
        #status {
            padding: 10px;
            background: #fff;
            border-radius: 4px;
            display: inline-block;
        }
    </style>
</head>
<body>
    <h1>Real-time Hand Pose Detection (YOLOv8)</h1>
    <button onclick="loadModel()">Load Model</button>
    <button onclick="startWebcam()">Start Webcam</button>
    <p id="status">Model not loaded</p>

    <div class="container">
        <video id="video" width="640" height="480" autoplay></video>
        <canvas id="canvas" width="640" height="480"></canvas>
    </div>

    <script type="module">
        let model;
        let video = document.getElementById("video");
        let canvas = document.getElementById("canvas");
        let ctx = canvas.getContext("2d");

        const CONF_THRESHOLD = 0.7;
        const IOU_THRESHOLD = 0.45;
        let isProcessing = false;
        let previousDetections = [];

        // Model input size constants
        const MODEL_WIDTH = 640;
        const MODEL_HEIGHT = 640;
        const SCALE_FACTOR = 2.0; // Adjust this to make bbox larger

        async function loadModel() {
            try {
                document.getElementById("status").innerText = "Loading model...";
                model = await tf.loadGraphModel('http://localhost:8000/model.json');
                document.getElementById("status").innerText = "Model loaded!";
                console.log("Model loaded successfully");
            } catch (error) {
                console.error("Error loading model:", error);
                document.getElementById("status").innerText = "Error loading model!";
            }
        }

        async function startWebcam() {
            if (!model) {
                alert("Please load the model first!");
                return;
            }

            try {
                const stream = await navigator.mediaDevices.getUserMedia({ 
                    video: { 
                        width: { ideal: 640 },
                        height: { ideal: 480 },
                        facingMode: 'user'
                    } 
                });
                video.srcObject = stream;
                video.onloadedmetadata = () => {
                    video.play();
                    processVideoFrame();
                };
            } catch (err) {
                console.error("Error accessing webcam:", err);
                document.getElementById("status").innerText = "Error accessing webcam!";
            }
        }

        async function processVideoFrame() {
            if (!model || !video.videoWidth || isProcessing) return;
            
            try {
                isProcessing = true;
                
                // Create a square input for the model while maintaining aspect ratio
                const offscreenCanvas = document.createElement('canvas');
                offscreenCanvas.width = MODEL_WIDTH;
                offscreenCanvas.height = MODEL_HEIGHT;
                const offscreenCtx = offscreenCanvas.getContext('2d');
                
                // Calculate scaling to maintain aspect ratio
                const scale = Math.min(MODEL_WIDTH / video.videoWidth, MODEL_HEIGHT / video.videoHeight);
                const scaledWidth = video.videoWidth * scale;
                const scaledHeight = video.videoHeight * scale;
                const offsetX = (MODEL_WIDTH - scaledWidth) / 2;
                const offsetY = (MODEL_HEIGHT - scaledHeight) / 2;
                
                offscreenCtx.fillStyle = 'black';
                offscreenCtx.fillRect(0, 0, MODEL_WIDTH, MODEL_HEIGHT);
                offscreenCtx.drawImage(video, offsetX, offsetY, scaledWidth, scaledHeight);
                
                const imgTensor = tf.tidy(() => {
                    return tf.browser.fromPixels(offscreenCanvas)
                        .expandDims(0)
                        .toFloat()
                        .div(255.0);
                });
        
                const predictions = await model.predict(imgTensor);
                imgTensor.dispose();
                
                const processedDetections = await processDetections(predictions, {
                    offsetX,
                    offsetY,
                    scale,
                    originalWidth: video.videoWidth,
                    originalHeight: video.videoHeight
                });
                
                const smoothedDetections = smoothDetections(processedDetections);
                drawDetections(smoothedDetections);
                
                previousDetections = smoothedDetections;
                
                if (Array.isArray(predictions)) {
                    predictions.forEach(p => p.dispose());
                } else {
                    predictions.dispose();
                }
                
            } catch (error) {
                console.error("Error in processing frame:", error);
            } finally {
                isProcessing = false;
                requestAnimationFrame(processVideoFrame);
            }
        }

        async function processDetections(predictionTensor, transformInfo) {
            const predictions = await predictionTensor.array();
            
            if (!predictions.length || !predictions[0].length) {
                return [];
            }
            
            let detections = [];
            const numDetections = predictions[0][0].length;
            
            for (let i = 0; i < numDetections; i++) {
                const confidence = predictions[0][4][i];
                
                if (confidence > CONF_THRESHOLD) {
                    // Get raw coordinates from model output
                    let x = (predictions[0][0][i] - transformInfo.offsetX) / transformInfo.scale;
                    let y = (predictions[0][1][i] - transformInfo.offsetY) / transformInfo.scale;
                    let width = (predictions[0][2][i] / transformInfo.scale) * SCALE_FACTOR;
                    let height = (predictions[0][3][i] / transformInfo.scale) * SCALE_FACTOR;
                    
                    // Get keypoint (assuming wrist point)
                    let kp_x = (predictions[0][5][i] - transformInfo.offsetX) / transformInfo.scale;
                    let kp_y = (predictions[0][6][i] - transformInfo.offsetY) / transformInfo.scale;
                    
                    // Normalize coordinates
                    x = x / transformInfo.originalWidth;
                    y = y / transformInfo.originalHeight;
                    width = width / transformInfo.originalWidth;
                    height = height / transformInfo.originalHeight;
                    kp_x = kp_x / transformInfo.originalWidth;
                    kp_y = kp_y / transformInfo.originalHeight;
                    
                    // Ensure coordinates are within bounds
                    x = Math.max(0, Math.min(1, x));
                    y = Math.max(0, Math.min(1, y));
                    kp_x = Math.max(0, Math.min(1, kp_x));
                    kp_y = Math.max(0, Math.min(1, kp_y));
                    
                    detections.push({
                        bbox: [x, y, width, height],
                        confidence,
                        keypoint: [kp_x, kp_y]
                    });
                }
            }
            
            return applyNMS(detections);
        }

        function smoothDetections(currentDetections) {
            if (!previousDetections.length) return currentDetections;
            
            return currentDetections.map(detection => {
                const prevDetection = findClosestPreviousDetection(detection, previousDetections);
                if (prevDetection) {
                    const alpha = 0.7;
                    return {
                        bbox: detection.bbox.map((coord, i) => 
                            alpha * coord + (1 - alpha) * prevDetection.bbox[i]
                        ),
                        confidence: detection.confidence,
                        keypoint: detection.keypoint.map((coord, i) => 
                            alpha * coord + (1 - alpha) * prevDetection.keypoint[i]
                        )
                    };
                }
                return detection;
            });
        }

        function findClosestPreviousDetection(detection, previousDetections) {
            if (!previousDetections.length) return null;
            
            let minDist = Infinity;
            let closestDetection = null;
            
            previousDetections.forEach(prevDetection => {
                const dist = Math.sqrt(
                    Math.pow(detection.keypoint[0] - prevDetection.keypoint[0], 2) +
                    Math.pow(detection.keypoint[1] - prevDetection.keypoint[1], 2)
                );
                
                if (dist < minDist) {
                    minDist = dist;
                    closestDetection = prevDetection;
                }
            });
            
            return minDist < 0.3 ? closestDetection : null;
        }

        function calculateIoU(box1, box2) {
            const [x1, y1, w1, h1] = box1;
            const [x2, y2, w2, h2] = box2;
            
            const x1min = x1 - w1/2;
            const x1max = x1 + w1/2;
            const y1min = y1 - h1/2;
            const y1max = y1 + h1/2;
            
            const x2min = x2 - w2/2;
            const x2max = x2 + w2/2;
            const y2min = y2 - h2/2;
            const y2max = y2 + h2/2;
            
            const xOverlap = Math.max(0, Math.min(x1max, x2max) - Math.max(x1min, x2min));
            const yOverlap = Math.max(0, Math.min(y1max, y2max) - Math.max(y1min, y2min));
            
            const intersectionArea = xOverlap * yOverlap;
            const union = w1 * h1 + w2 * h2 - intersectionArea;
            
            return intersectionArea / union;
        }

        async function applyNMS(detections) {
            detections.sort((a, b) => b.confidence - a.confidence);
            
            const selected = [];
            const active = new Set(Array(detections.length).keys());
            
            for (let i = 0; i < detections.length; i++) {
                if (!active.has(i)) continue;
                
                selected.push(detections[i]);
                
                for (let j = i + 1; j < detections.length; j++) {
                    if (!active.has(j)) continue;
                    
                    const iou = calculateIoU(detections[i].bbox, detections[j].bbox);
                    if (iou >= IOU_THRESHOLD) active.delete(j);
                }
            }
            
            return selected;
        }

        function drawDetections(detections) {
            ctx.clearRect(0, 0, canvas.width, canvas.height);
            ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
            
            detections.forEach(detection => {
                const [x, y, width, height] = detection.bbox;
                const [keypointX, keypointY] = detection.keypoint;
                
                // Convert normalized coordinates to pixel values
                const boxX = (x - width/2) * canvas.width;
                const boxY = (y - height/2) * canvas.height;
                const boxWidth = width * canvas.width;
                const boxHeight = height * canvas.height;
                
                // Draw bounding box
                ctx.strokeStyle = 'red';
                ctx.lineWidth = 2;
                ctx.strokeRect(boxX, boxY, boxWidth, boxHeight);
                
                // Draw keypoint
                const kpX = keypointX * canvas.width;
                const kpY = keypointY * canvas.height;
                
                ctx.fillStyle = 'blue';
                ctx.beginPath();
                ctx.arc(kpX, kpY, 5, 0, 2 * Math.PI);
                ctx.fill();
                
                // Draw confidence score
                ctx.fillStyle = 'red';
                ctx.font = '14px Arial';
                ctx.fillText(`Conf: ${detection.confidence.toFixed(2)}`, boxX, boxY - 5);

                // Draw lines from bbox center to keypoint
                ctx.beginPath();
                ctx.moveTo(boxX + boxWidth/2, boxY + boxHeight/2);
                ctx.lineTo(kpX, kpY);
                ctx.strokeStyle = 'green';
                ctx.stroke();
            });
        }

        window.loadModel = loadModel;
        window.startWebcam = startWebcam;
    </script>
</body>
</html>

Hand wrist detection

import os
import onnx
import time
import yaml
import torch
import numpy as np
from pathlib import Path
from ultralytics import YOLO

class HandWristDetector:
    def __init__(self, config_path='config.yaml'):
        """
        Initialize HandWristDetector with configuration
        
        Args:
            config_path (str): Path to the configuration YAML file
        """
        with open(config_path, 'r') as f:
            self.config = yaml.safe_load(f)
        
        # Initialize YOLO pose detection model
        model_size = self.config['model']['size']
        model_path = f"yolov8{model_size}-pose.pt"
        
        # Download model if not exists
        if not os.path.exists(model_path):
            print(f"Downloading YOLOv8{model_size} pose model...")
        
        self.model = YOLO(model_path)
        
    def train(self, data_yaml):
        """
        Train the model with custom configuration
        
        Args:
            data_yaml (str): Path to the data YAML file containing dataset configuration
            
        Returns:
            results: Training results object
        """
        # Set training arguments
        args = dict(
            data=data_yaml,                    
            task='pose',                       
            mode='train',                      
            model=self.model,                  
            epochs=self.config['model']['epochs'],
            imgsz=self.config['model']['image_size'],
            batch=self.config['model']['batch_size'],
            device='',                         
            workers=8,                         
            optimizer='AdamW',                  
            patience=20,                       
            verbose=True,                      
            seed=0,                           
            deterministic=True,                
            single_cls=True,                   
            rect=True,                         
            cos_lr=True,                       
            close_mosaic=10,                   
            resume=False,                      
            amp=True,                          
            
            # Learning rate settings
            lr0=0.001,                        
            lrf=0.01,                         
            momentum=0.937,                    
            weight_decay=0.0005,              
            warmup_epochs=3.0,                
            warmup_momentum=0.8,              
            warmup_bias_lr=0.1,               
            
            # Loss coefficients
            box=7.5,                          
            cls=0.5,                          
            pose=12.0,                        
            kobj=2.0,                         
            
            # Augmentation settings
            degrees=10.0,                      
            translate=0.2,                    
            scale=0.7,                        
            fliplr=0.5,                       
            mosaic=1.0,                       
            mixup=0.0,                        
            
            # Saving settings
            project='runs/pose',              
            name='train',                     
            exist_ok=False,                   
            pretrained=True,                  
            plots=True,                       
            save=True,                        
            save_period=-1,                   
            
            # Validation settings
            val=True,                         
            save_json=False,                  
            conf=None,                        
            iou=0.7,                          
            max_det=300,                      
            
            # Advanced settings
            fraction=1.0,                    
            profile=False,                    
            overlap_mask=True,                
            mask_ratio=4,                     
            dropout=0.2,                      
            label_smoothing=0.1,              
            nbs=64,                          
        )
        
        # Start training
        try:
            results = self.model.train(**args)
            return results
        except Exception as e:
            print(f"Training error: {str(e)}")
            raise
    
    def evaluate(self, data_yaml):
        """
        Evaluate the model on validation/test set
        
        Args:
            data_yaml (str): Path to the data YAML file
            
        Returns:
            results: Validation results object
        """
        try:
            results = self.model.val(
                data=data_yaml,
                imgsz=self.config['model']['image_size'],
                batch=self.config['model']['batch_size'],
                conf=0.25,
                iou=0.7,
                device='',
                verbose=True,
                save_json=False,
                save_hybrid=False,
                max_det=300,
                half=False
            )
            return results
        except Exception as e:
            print(f"Evaluation error: {str(e)}")
            raise
    
    def export_model(self, format='onnx'):
        """
        Export the model to specified format
        
        Args:
            format (str): Format to export to ('onnx' or 'tflite')
        """
        try:
            if format == 'onnx':
                self.model.export(
                    format='onnx',
                    dynamic=True,
                    simplify=True,
                    opset=11,
                    device='cpu'
                )
            elif format == 'tflite':
                self.model.export(
                    format='tflite',
                    int8=True,
                    device='cpu'
                )
        except Exception as e:
            print(f"Export error: {str(e)}")
            raise
    
    def predict(self, image_path):
        """
        Run inference on a single image
        
        Args:
            image_path (str): Path to the input image
            
        Returns:
            results: Detection results object
        """
        try:
            results = self.model.predict(
                source=image_path,
                conf=0.25,
                iou=0.45,
                imgsz=self.config['model']['image_size'],
                device='',
                verbose=False,
                save=True,
                save_txt=False,
                save_conf=False,
                save_crop=False,
                show_labels=True,
                show_conf=True,
                max_det=300,
                agnostic_nms=False,
                classes=None,
                retina_masks=False,
                boxes=True
            )
            return results[0]
        except Exception as e:
            print(f"Prediction error: {str(e)}")
            raise
    
    def predict_batch(self, image_paths):
        """
        Run inference on a batch of images
        
        Args:
            image_paths (list): List of paths to input images
            
        Returns:
            results: List of detection results objects
        """
        try:
            results = self.model.predict(
                source=image_paths,
                conf=0.25,
                iou=0.45,
                imgsz=self.config['model']['image_size'],
                batch=self.config['model']['batch_size']
            )
            return results
        except Exception as e:
            print(f"Batch prediction error: {str(e)}")
            raise

config.yaml

paths: 
  hand_img_dir: "/train/images"
  non_hand_dir: "/non-hands"        
  annotations_dir: "/train/labels"
  output_dir: "/Hand_wrist_keypoint"


model:
  size: "n"  
  epochs: 50  
  image_size: 224  
  batch_size: 16  
  pretrained: true 
  conf_thres: 0.25  
  iou_thres: 0.45  
  device: ""  

training:
  train_ratio: 0.7
  val_ratio: 0.15
  seed: 42

**What Actually Happened:**

The model does detect things, but with significant issues:

The bounding boxes and keypoints appear, but not where they should be – they’re incorrectly positioned relative to my actual hand
Multiple overlapping detections occur for a single hand, suggesting NMS isn’t working properly
The model unexpectedly detects my face, even though it was trained only for hand detection
There’s no stability in the detections – they jitter and move erratically
While the model technically “works” (it produces outputs), the detections are so misaligned and unstable that they’re unusable
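
Two hedged diagnostics that may explain these symptoms. First, inspect the raw output layout once: processDetections assumes the channel order [cx, cy, w, h, score, kpt_x, kpt_y], but if the keypoint was annotated with a visibility flag the export may emit an extra keypoint-confidence channel, and the raw coordinates live in the 640x640 letterboxed input space, not in video pixels:

// hypothetical one-off check after the model loads
const dummy = tf.zeros([1, 640, 640, 3]);
const out = model.predict(dummy);
console.log('output shape:', Array.isArray(out) ? out.map(t => t.shape) : out.shape);
// e.g. [1, 7, 8400] -> cx, cy, w, h, score, kpt_x, kpt_y
//      [1, 8, 8400] -> the keypoint also carries a visibility/confidence channel
dummy.dispose();
(Array.isArray(out) ? out : [out]).forEach(t => t.dispose());

Second, the duplicates and jitter are consistent with NMS comparing distorted boxes: SCALE_FACTOR = 2.0 inflates width and height before the IoU test, and training at imgsz 224 while running inference at 640 is itself a plausible cause of degraded, face-triggering detections.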

How to Reboot an Android App in React Native?

I am trying to restart my React Native app programmatically on Android, but none of the solutions available online seem to work in production. I have tried multiple approaches, but none have provided a reliable way to reboot the app automatically.
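
For reference, the approach most commonly suggested for this is the third-party react-native-restart package, which reloads the JavaScript bundle rather than rebooting the native process; a minimal sketch, assuming that package is installed and linked:

import RNRestart from 'react-native-restart'; // third-party package, not core RN

// e.g. after applying a settings change:
RNRestart.restart(); // recent versions; older releases expose RNRestart.Restart()

Whether it behaves reliably in production builds is exactly what would need verifying, since that is where the question says other solutions fall down.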

on click, image change. image not changing positions (javascript, html, css)

I’m trying to make it so that when you click on image 1, image 2 appears, but in a different position than image 1. Right now, if you click on image 1, image 2 replaces image 1 in the same position image 1 was in, instead of appearing in a different position on the page.

var myImage = document.querySelector('img');
myImage.onclick = function() {
    var mySrc = myImage.getAttribute('src');
    if (mySrc === 'site/photos/Untitled1095_20250217004721.png') {
        myImage.setAttribute('src', 'site/photos/Untitled1098_20250217153316.png');
    } else {
        myImage.setAttribute('src', 'site/photos/Untitled1095_20250217004721.png');
    }
}

#bartender { position: absolute; top: 20.5px; left: 743px; }
#drink { position: absolute; top: 0px; left: 0px; }
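
One hedged way to get the second image to appear elsewhere: the stylesheet already defines two absolute positions (#bartender and #drink), so swap the element's id together with its src, assuming those two rules are meant to be the two positions:

var myImage = document.querySelector('img');
myImage.onclick = function() {
    if (myImage.id === 'bartender') {
        myImage.setAttribute('src', 'site/photos/Untitled1098_20250217153316.png');
        myImage.id = 'drink';      // picks up the #drink position
    } else {
        myImage.setAttribute('src', 'site/photos/Untitled1095_20250217004721.png');
        myImage.id = 'bartender';  // back to the #bartender position
    }
};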

Is there a way to set the entire content of a document?

I’m writing some unit tests with jest. I’ve captured the rendered HTML of some web pages in files and want to use those as data for my tests. I can load up the <body> of the saved HTML with:

const fs = require('node:fs');
html = fs.readFileSync('blah-blah-blah.body.html', 'utf8');
document.body.innerHTML = html;

which works, but it’s annoying because while it’s easy to get the full document with curl url -o blah-blah-blah.html, it’s more complicated to post-process that to pull out just the <body> element. I tried the obvious thing:

html = fs.readFileSync('blah-blah-blah.document.html', 'utf8');
document.documentElement = html;

but apparently document.documentElement is read-only, so sadness.
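
Two approaches that generally work in jsdom (the environment jest uses by default); a sketch:

const fs = require('node:fs');
const html = fs.readFileSync('blah-blah-blah.document.html', 'utf8');

// Option 1: documentElement itself is read-only, but its innerHTML is not;
// this replaces the children of <html> while keeping the same document object.
document.documentElement.innerHTML = html;

// Option 2: rewrite the document wholesale.
document.open();
document.write(html);
document.close();

One caveat: under jest's default config, scripts in the loaded HTML won't execute either way, which is usually what you want for fixture data.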

Jqgrid performance slow at large dataset

I’m trying to retrieve data from a database to show in a jqGrid.
This code works well and fast for small result sets, but when it shows thousands of rows it gets really slow (more than 2 minutes), which is an issue for the user.

Now, in the jqGrid I’ve enabled paging, but it still seems slow when I retrieve thousands of rows.
**Is there any way to improve this?**

Note: in the code below I’m working with Java Spring MVC.

Another note: in the HTML I’m using imui:listTable because I’m working in an intra-mart environment, but it’s basically jqGrid under the hood.

Repository.java

public List<Map<String, Object>> selectGRList(String search_transtype, String[] search_grno, String search_grdatef,
            String search_grdatet, String[] search_deliverynote, String[] search_materialno, String user_cd)
            throws Exception {

        System.out.println("At InvoicingRepository.selectGrList()");

        try {
            InvoicingWorkflowService invoicingWorkflowService = new InvoicingWorkflowService();
            ItemListNode[] ItemListNodeArray2 = invoicingWorkflowService.getItemsByCategory("vat_master");
            String[] validfromArray = new String[ItemListNodeArray2.length];
            for (int i = 0; i < ItemListNodeArray2.length; i++) {
                validfromArray[i] = ItemListNodeArray2[i].getItemCd();
            }

            // Sort validfromArray to improve searching later
            Arrays.sort(validfromArray);

            String doc_type = "ZN01";
            String plant = "D4N1";

            SQLManager sqlManager = new SQLManager();
            String sql = "SELECT * FROM v_gr_list_w_price_v2 ";
            Collection<Object> parameters = new ArrayList<>();
            boolean hasWhere = false;

            // Apply filters (as before)
            if (!user_cd.equals("tenant")) {
                sql += "WHERE vendor_code LIKE ? ";
                parameters.add(user_cd.substring(0, 7) + "%");
                hasWhere = true;
            }

            // TR Type filtering
            if (search_transtype != null && !search_transtype.isEmpty()) {
                sql += (hasWhere ? "AND " : "WHERE ") + "tr_type = ? ";
                parameters.add(search_transtype);
                hasWhere = true;
            }

            // Date filtering
            if (!search_grdatef.isEmpty() && !search_grdatet.isEmpty()) {
                sql += (hasWhere ? "AND " : "WHERE ") + "gr_date BETWEEN ? AND ? ";
                parameters.add(search_grdatef);
                parameters.add(search_grdatet);
                hasWhere = true;
            }

            // Filter for GR number
            if (search_grno != null && search_grno.length > 0 && !search_grno[0].isEmpty()) {
                sql += (hasWhere ? "AND " : "WHERE ") + "gr_doc IN ("
                        + String.join(", ", Collections.nCopies(search_grno.length, "?")) + ") ";
                parameters.addAll(Arrays.asList(search_grno));
                hasWhere = true;
            }

            // Filter for delivery note
            if (search_deliverynote != null && search_deliverynote.length > 0 && !search_deliverynote[0].isEmpty()) {
                sql += (hasWhere ? "AND " : "WHERE ") + "delivery_note IN ("
                        + String.join(", ", Collections.nCopies(search_deliverynote.length, "?")) + ") ";
                parameters.addAll(Arrays.asList(search_deliverynote));
                hasWhere = true;
            }

            // Filter for material number
            if (search_materialno != null && search_materialno.length > 0 && !search_materialno[0].isEmpty()) {
                sql += (hasWhere ? "AND " : "WHERE ") + "material_number IN ("
                        + String.join(", ", Collections.nCopies(search_materialno.length, "?")) + ") ";
                parameters.addAll(Arrays.asList(search_materialno));
            }

            long startTime = System.currentTimeMillis(); // Start timing

            List<InvoicingGRListModel> sqlResults = (List<InvoicingGRListModel>) (sqlManager
                    .select(InvoicingGRListModel.class, sql, parameters));

            long endTime = System.currentTimeMillis(); // End timing

            System.out.println("Query Execution Time: " + (endTime - startTime) + " ms");

            List<Map<String, Object>> result = new ArrayList<Map<String, Object>>();
            DecimalFormat decimalFormat = new DecimalFormat("#,###.##");

            // Loop through each record and populate the map
            for (InvoicingGRListModel invoicingGRListModel : sqlResults) {
                Integer grDate = Integer.parseInt(invoicingGRListModel.getGr_date());

                // Use binary search to find the maxDate instead of iterating through all values
                int maxDate = findMaxDate(validfromArray, grDate);

                // Cache the result of invoicingWorkflowService.getItem() once per loop
                // iteration
                // Correct the type here to match the actual return type
                Item item = invoicingWorkflowService.getItem(Integer.toString(maxDate));
                Integer validto = Integer.parseInt(item.getItemShortName());
                String vatrate = item.getItemName();
                BigDecimal vatPercentage = new BigDecimal(vatrate).divide(BigDecimal.valueOf(100.0), 4,
                        RoundingMode.HALF_UP);

                // Calculations
                BigDecimal amount = new BigDecimal(invoicingGRListModel.getAmount());
                BigDecimal net_price = new BigDecimal(invoicingGRListModel.getNet_price());
                BigDecimal vat_amount = amount.multiply(vatPercentage).setScale(2, RoundingMode.HALF_UP);
                BigDecimal total_price = net_price.add(vat_amount).setScale(2, RoundingMode.HALF_UP);

                // Create the map for this record
                Map<String, Object> valueMap = new HashMap<>();
                valueMap.put("f_doc_type", invoicingGRListModel.getDoc_type());
                valueMap.put("f_plant_code", invoicingGRListModel.getPlant_code());
                valueMap.put("f_gr_no_line", invoicingGRListModel.getGr_no_line());
                valueMap.put("f_gr_no", invoicingGRListModel.getGr_doc());
                valueMap.put("f_gr_line", invoicingGRListModel.getGr_doc_item());
                valueMap.put("f_gr_date", invoicingGRListModel.getGr_date());
                valueMap.put("f_delivery_note", invoicingGRListModel.getDelivery_note());
                valueMap.put("f_item", invoicingGRListModel.getItem());
                valueMap.put("f_material_number", invoicingGRListModel.getMaterial_number());
                valueMap.put("f_vendor_code", invoicingGRListModel.getVendor_code());
                valueMap.put("f_vendor_name", invoicingGRListModel.getVendor_name());
                valueMap.put("f_po_no", invoicingGRListModel.getPo_no());
                valueMap.put("f_po_item", invoicingGRListModel.getPo_item());
                valueMap.put("f_material_name", invoicingGRListModel.getMaterial_name());
                valueMap.put("f_qty", invoicingGRListModel.getGr_quantity());
                valueMap.put("f_unit", invoicingGRListModel.getUnit());
                valueMap.put("f_pricing_date", invoicingGRListModel.getPricing_date());
                valueMap.put("f_purch_group", invoicingGRListModel.getPurch_group());
                valueMap.put("f_currency", invoicingGRListModel.getCurrency());
                valueMap.put("f_vat_percent", vatrate);
                valueMap.put("f_price", decimalFormat.format(net_price.doubleValue()));
                valueMap.put("f_amount_item", decimalFormat.format(amount.doubleValue()));
                valueMap.put("f_vat_amount", decimalFormat.format(vat_amount.doubleValue()));
                valueMap.put("f_transaction_type", invoicingGRListModel.getTr_type());

                // Add the populated map to the result list
                result.add(valueMap);
            }

            return result;

        } catch (SQLException | AccessSecurityException | IllegalArgumentException | InstantiationException
                | IllegalAccessException | InvocationTargetException | NamingException e) {
            e.printStackTrace();
            throw new Exception("DB error in selectGRList()", e);
        }
    }

javascript :

$('#requestSearchGR').click(function() {
        var searchtranstype = $('#i_transaction_type').val();
        var searchgrno = $('#i_search_grno').val();
        var searchgrdatef = $('#i_search_grdate_f').val();
        var searchgrdatet = $('#i_search_grdate_t').val();
        var searchdeliverynote = $('#i_search_deliverynote').val();
        var searchmaterialno = $('#i_search_materialno').val();

        // Validate Date Search
        if ((searchgrdatef.length == 0 && searchgrdatet.length != 0) || (searchgrdatef.length != 0 && searchgrdatet.length == 0)) {
            showErrorDialog("Please input both Date From and To, if searching by GR Date!");
            return;
        }

        $('#loadingIndicator').show();
        $('#progressText').text('Loading data...');

        let progressUpdate = setInterval(function() {
            let progress = parseInt($('#progressText').text().replace(/\D/g, '')) || 0;
            if (progress < 80) $('#progressText').text('Loading data... ');
        }, 1000); // Update every 1s instead of 500ms

        // Start the timer
        const startTime = performance.now();

        $.ajax({
            url: 'invoicing/searchgr',
            type: 'POST',
            data: {
                transaction_type: searchtranstype,
                search_grno: searchgrno,
                search_grdate_f: searchgrdatef,
                search_grdate_t: searchgrdatet,
                search_deliverynote: searchdeliverynote,
                search_materialno: searchmaterialno
            },
            dataType: 'json',
            cache: false,
            success: function(returnObj) {
                // Stop the timer
                const endTime = performance.now();
                const duration = (endTime - startTime) / 1000; // Time in seconds
                console.log('Data loading time: ' + duration + ' seconds'); // Log it or display somewhere

                clearInterval(progressUpdate);
                $('#progressText').text('Loading data... ');
                setTimeout(() => $('#loadingIndicator').hide(), 500);

                if (!returnObj.gridGRList || returnObj.gridGRList.length === 0) {
                    showErrorDialog("No Data Found!");
                    $("#listSearchGR").clearGridData();
                } else {
                    $("#listSearchGR").clearGridData().setGridParam({
                        data: returnObj.gridGRList.filter((data) => !filter.includes(data.f_gr_no_line))
                    }).trigger("reloadGrid");
                }
            },
            error: function(error) {
                clearInterval(progressUpdate);
                $('#loadingIndicator').hide();
                console.log(JSON.stringify(error));
                $("#listSearchGR").clearGridData();
            }
        });
    });


html :

<imui:listTable id="listSearchGR" class="listSearchGR" name="listSearchGR" data="${listSearchGIResult}" autoWidth="true" height="250" multiSelect="true" checkBoxOnly="true" loadonce="true">
                    <pager rowNum="10" rowList="5,10,25, 50, 100, 200, 500" />
                    <cols>
                        <col name="f_plant_code" caption="Plant Code" />
                        <col name="f_doc_type" hidden="true"/>
                        <col name="f_gr_no_line" hidden="true" />
                        <col name="f_gr_no" caption="GR No" sortType="text" />
                        <col name="f_gr_line" caption="GR Line" />
                        <col name="f_gr_date" caption="GR Date" />
                        <col name="f_delivery_note" caption="Delivery Note" />
                        <col name="f_item" caption="Item" hidden="true" />
                        <col name="f_material_number" caption="Material No" />
                        <col name="f_material_name" caption="Material Name" />
                        <col name="f_qty" caption="Qty" align="right" />
                        <col name="f_unit" caption="Unit" />
                        <col name="f_pricing_date" caption="Pricing Date" />
                        <col name="f_purch_group" caption="Purchasing Group" />
                        <col name="f_currency" caption="Currency" />
                        <col name="f_price" caption="Price" align="right" />
                        <col name="f_amount_item" caption="Amount" align="right" />
                        <col name="f_vat_percent" caption="VAT %" align="right" />
                        <col name="f_vat_amount" caption="VAT" align="right" />
                        <col name="f_transaction_type" caption="Trans. Type" align="right" />
                        <col name="f_vendor_code" caption="Vendor Code" align="right" hidden="true" />
                        <col name="f_vendor_name" caption="Vendor Name" align="right" hidden="true" />
                        <col name="f_po_no" caption="Po No" align="right" hidden="true" />
                        <col name="f_po_item" caption="Po Item" align="right" hidden="true" />
                    </cols>
                </imui:listTable>
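
Two things stand out, for what it's worth. First, with loadonce="true" the server returns the entire filtered result set in one response, so the pager only paginates rows that are already in the browser; true server-side paging means sending the page number and page size with each request and adding LIMIT/OFFSET (plus a COUNT query) in the repository. Second, the Java loop calls invoicingWorkflowService.getItem() once per row, and caching its result per distinct maxDate in a Map before the loop should cut a lot of round trips. A hedged sketch of the AJAX side of server paging (the page and rows parameter names are assumptions your controller would need to read):

$.ajax({
    url: 'invoicing/searchgr',
    type: 'POST',
    data: {
        transaction_type: searchtranstype,
        // ...other search filters as before...
        page: 1,    // requested page number (assumed parameter name)
        rows: 100   // page size (assumed parameter name)
    },
    dataType: 'json',
    cache: false,
    success: function(returnObj) {
        // each response now carries only one page of rows
        $("#listSearchGR").clearGridData()
            .setGridParam({ data: returnObj.gridGRList })
            .trigger("reloadGrid");
    }
});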

React and tailwind button hidden because of global generated css

I am trying to render a blue button on my screen in React. However, the following auto-generated Tailwind (Preflight) rule in my out.css file is causing all buttons to have a transparent background. How can I disable this global logic only for buttons? If I change this file, it is overwritten on the next compile, because I am running the following command when running my app:

tailwindcss -i ../main.css -o ../out.css --watch

out.css:

/*
! tailwindcss v3.2.4 | MIT License | https://tailwindcss.com
*/

...

/*
1. Correct the inability to style clickable types in iOS and Safari.
2. Remove default button styles.
*/

button,
[type='button'],
[type='reset'],
[type='submit'] {
  -webkit-appearance: button;
  /* 1 */
  background-color: transparent;
  /* 2 */
  background-image: none;
  /* 2 */
}

...

Here is inspecting dev tools:
dev tools screenshot showing the global override
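
Rather than editing out.css (which --watch regenerates), one option is to give the button explicit utilities; Preflight's background-color: transparent only wins when no background utility is applied. A sketch (the component name is made up; the class names are ordinary Tailwind utilities):

export function BlueButton() {
    return (
        <button
            type="button"
            className="bg-blue-600 hover:bg-blue-700 text-white px-4 py-2 rounded"
        >
            Save
        </button>
    );
}

Alternatively, setting corePlugins: { preflight: false } in tailwind.config.js disables the whole reset, but that is a much blunter instrument than styling the buttons you own.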

jQuery show() not working and giving warning

I have the following line intended to show a <div>.

$div.show();

Although I verified $div has a length of 1, it does not display the element. Moreover, I get a warning in the console.

[Deprecation] The ‘textprediction’ attribute will be removed in the future. Use the ‘writingsuggestions’ attribute instead. See https://learn.microsoft.com/en-us/microsoft-edge/web-platform/site-impacting-changes for more information.

Question 1: How to troubleshoot show() not having any effect? And could the issue be related to the warning?

Question 2: Why am I getting this warning? Where am I using the textprediction attribute?
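
For question 1: show() only clears the inline display style, so the usual suspects are a hidden ancestor or a stylesheet rule it cannot override. A hedged checklist in code:

$div.show();
console.log($div.css('display'));             // still 'none'? a CSS rule is winning
console.log($div.is(':visible'));             // false if the element or any ancestor is hidden
console.log($div.parents(':hidden').length);  // number of hidden ancestors
// a rule like `display: none !important` in a stylesheet defeats show();
// check the Computed panel in dev tools to see which rule applies

For question 2: that deprecation notice is emitted by Edge about an attribute present somewhere in the page (possibly injected by a framework component or browser extension rather than your own markup), and it is almost certainly unrelated to show().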

Is there a way to avoid the `Secure coding is not enabled for restorable state…` warning for Node scripts?

I’m writing some automated tests using Node.js and Cypress, and since I have a lot of them I want to group them and split them off into their own files. I have started doing this, e.g.:

//myscripts/prodPageTests.js

const { exec } = require('child_process');

let command = "npx cypress run --headless --browser chrome --spec './cypress/e2e/LoremProdPage.feature'";
// Other tests to come.
// Probably in some kind of loop with args.

exec(command, (error, stdout, stderr) => {
    if (error) {
        console.error(`Error executing command: ${error}`);
        return;
    }
    console.log(`Output:\n${stdout}`);
    if (stderr) {
        console.error(`Error Output:\n${stderr}`);
    }
});

This works, but after it runs I also get the warning:

Error Output:

DevTools listening on ws://127.ip.address/devtools/browser/<some hash>
2025-02-19 10:18:07.287 Cypress[18578:4259699] WARNING: Secure coding is not enabled for restorable state! Enable secure coding by implementing NSApplicationDelegate.applicationSup

I understand this to be something to do with macOS Sonoma.

I’m just curious: is there a work around for this with node.js?
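
As far as I know there is no Node-side switch for this: the message is emitted by the Cypress/Electron binary on macOS Sonoma, not by Node. A pragmatic workaround is to filter the known-noisy line out of stderr before logging; a sketch along the lines of the callback above:

const { exec } = require('child_process');

exec(command, (error, stdout, stderr) => {
    if (error) {
        console.error(`Error executing command: ${error}`);
        return;
    }
    console.log(`Output:\n${stdout}`);
    // drop only the known macOS warning; keep all other stderr output
    const filtered = (stderr || '')
        .split('\n')
        .filter(line => !line.includes('Secure coding is not enabled'))
        .join('\n')
        .trim();
    if (filtered) {
        console.error(`Error Output:\n${filtered}`);
    }
});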

Is it possible to populate a table with missing tags using jquery?

I have the following table, with two placeholder <td>---</td> cells:

<table>
  <tr>
    <td>Item 1</td>
    <td>Item 2</td>
    <td>Item 3</td>
  </tr>
  <tr>
    <td>Item 1</td>
    <td>---</td>
    <td>---</td>
  </tr>
</table>

What I’m looking for is to detect whether each tr has 3 td tags; if it only has 2 or 1, the missing td tags should be added automatically.
For this example I only used 3 td per tr, but I hope to be able to use more. I don’t know if it’s possible, or if I explained myself correctly, but I hope someone can guide me.
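
Yes, this is doable; a hedged jQuery sketch that measures the widest row rather than hardcoding 3, so it should extend to any number of cells:

// pad every row to the column count of the widest row
var cols = 0;
$('table tr').each(function() {
    cols = Math.max(cols, $(this).children('td').length);
});

$('table tr').each(function() {
    var $row = $(this);
    for (var i = $row.children('td').length; i < cols; i++) {
        $row.append('<td>---</td>');
    }
});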

Sorting & saving mixed link/embed items in a draggable list causes data loss

I have a dashboard in a PHP + JS project where I manage two types of items in the same list: links (text + URL) and embeds (type + URL). They’re both appended into a single container (#links-container) and can be reordered via Sortable.js. After dragging them around and saving, sometimes the newly added links are either missing, empty, or saved incorrectly.

What I want

Let users add/edit links and embeds.
Allow them to drag and reorder these items in any order.
Successfully save all data, preserving everything intact.
What happens
When I add one link and one embed, then drag the embed above the link (so embed is index 0, link is index 1), after submitting the form, the link data sometimes ends up empty or gone altogether in the database. In other attempts, the embed data seems fine, but the link data disappears or is inconsistent.

Relevant code

HTML (simplified):

<form id="edit-page-form" method="POST">
  <input type="hidden" name="items_order" id="items_order" />

  <div id="links-container">
    <!-- link-item or embed-item appended dynamically via JS -->
    <!-- both have class="link-item", but data-type="link" or "embed" -->
  </div>
</form>

JS (dynamic-inputs.js snippet):

if (linksContainerEl) {
  new Sortable(linksContainerEl, {
    handle: '.drag-handle',
    animation: 400,
    onEnd: function() {
      // Previously, we tried re-indexing each item’s name attributes.
      // That broke things, so we removed it.

      // Now we just update data-index:
      $('#links-container .link-item').each(function(i) {
        $(this).attr('data-index', i);
      });

      // Then build an order array like "embed-0,link-1"
      let orderData = [];
      $('#links-container .link-item').each(function() {
        const itemType = $(this).data('type'); // "link" or "embed"
        const itemIndex = $(this).data('index');
        orderData.push(`${itemType}-${itemIndex}`);
      });
      $('#items_order').val(orderData.join(','));
    }
  });
}

Troubleshooting

Removed all re-index logic that rewrote name attributes.
Logged the final FormData in the console. Everything seems correct, but once it hits the server, the link data can be blank.
Sometimes it fully saves, sometimes not. The inconsistency is confusing.
Question
How do I handle sorting both links and embeds in the same container and reliably maintain their data on submission? Is there a recommended best practice for a scenario where two different item types share a single list? Any insights are greatly appreciated.

Thank you in advance!
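
One pattern that tends to avoid this class of bug: give every item a stable, unique key when it is created, and name its inputs with that key instead of a positional index. Dragging then never needs to rewrite name attributes, and the order is submitted separately. A sketch (addLinkItem, the item markup, and the items[...] naming are illustrative assumptions; PHP parses items[...] field names into a nested array):

let itemSeq = 0;

function addLinkItem(text, url) {
    const key = 'item' + (itemSeq++); // stable key, survives any number of drags
    $('#links-container').append(`
        <div class="link-item" data-type="link" data-key="${key}">
            <span class="drag-handle">::</span>
            <input type="hidden" name="items[${key}][type]" value="link">
            <input name="items[${key}][text]" value="${text}">
            <input name="items[${key}][url]" value="${url}">
        </div>`);
}

// on Sortable's onEnd, record the order as a list of keys, not indices
function captureOrder() {
    const order = $('#links-container .link-item')
        .map(function() { return $(this).data('key'); })
        .get();
    $('#items_order').val(order.join(','));
}

On the server, items then arrives keyed by item regardless of DOM order, and items_order says how to sort them.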

How to keep background.JS as a service worker but still use import statements in my extension?

I am working on a web extension, and whenever I try to import any function from my extension scripts in my background.js, it says “service worker (inactive)” and I cannot actually inspect it.

I want to import functions in background.js but still keep its capability as a service worker. Is that somehow possible?

Below is my vite.config

import { defineConfig } from 'vite';
import vue from '@vitejs/plugin-vue';
import webExtension from '@samrum/vite-plugin-web-extension';
import { nodePolyfills } from 'vite-plugin-node-polyfills';
import path from 'path';
import { fileURLToPath } from 'url';

const __dirname = path.dirname(fileURLToPath(import.meta.url));
const srcDir = path.resolve(__dirname, 'src');

// https://vitejs.dev/config/
export default defineConfig({
  plugins: [
    vue(),
    nodePolyfills({
      globals: {
        Buffer: true,
        global: true,
        process: true
      },
      protocolImports: true
    }),
    webExtension({
      manifest: {
        manifest_version: 3,
        name: 'Layer VOne (Testnet)',
        version: '0.0.5',
        description: 'Help me make the extension that brings native Layer 1 and Web 3 together on The Verus Blockchain',
        icons: {
          "16": "icons/logo.png",
          "48": "icons/logo.png",
          "128": "icons/logo.png"
        },
        permissions: [
          'storage',
          'activeTab',
          'scripting',
          'tabs',
          'windows'
        ],
        host_permissions: [
          "http://localhost:*/*",
          "https://*/*"
        ],
        action: {
          default_popup: 'popup.html',
          default_icon: {
            "16": "icons/logo.png",
            "48": "icons/logo.png",
            "128": "icons/logo.png"
          }
        },
        background: {
          service_worker: 'src/background.js'
        },
        content_scripts: [
          {
            matches: ["http://localhost:*/*", "https://*/*"],
            js: ["src/contentScript.js"],
            run_at: "document_start"
          }
        ],
        web_accessible_resources: [
          {
            resources: ["src/provider.js"],
            matches: ["http://localhost:*/*", "https://*/*"]
          }
        ]
      }
    })
  ],
  resolve: {
    alias: {
      '@': srcDir,
    },
  },
  build: {
    outDir: 'dist',
    rollupOptions: {
      input: {
        background: path.resolve(__dirname, 'src/background.js'),
        contentScript: path.resolve(__dirname, 'src/contentScript.js'),
        provider: path.resolve(__dirname, 'src/provider.js')
      },
      output: {
        entryFileNames: 'src/[name].js'
      }
    }
  }
});
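
For what it's worth, Manifest V3 does allow static import statements in the background script, but only when it is declared as an ES module via "type": "module". In this config's inline manifest that would look like the sketch below (I haven't verified how @samrum/vite-plugin-web-extension bundles module workers, so treat it as a starting point):

const manifest = {
    // ...rest of the manifest as above...
    background: {
        service_worker: 'src/background.js',
        type: 'module'  // enables static `import` statements in background.js
    }
};

Also worth noting: "service worker (inactive)" in chrome://extensions often just means the worker has gone idle; if it shows an error instead, clicking it usually reveals the failing import.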