Using the same index value for two fields

I’m looking to correlate two fields of my form, using the same index number.

I mean, I have a group of fields called traj in an array. Each traj field is (or should be) related to a niv field with the same index number (a hierarchical index). I would like to find a specific word in one of the traj fields, get the number in the related niv field, and modify the result field with it. If the word isn’t there, nothing happens.

My code is:

  // there are 6 traj fields (traj.0, and so on, up to traj.5) and 6 niv fields (same way, niv.0 to niv.5), and a result field

  var oAtzar = this.getField("traj").getArray(); // create the array of traj fields

  for (var i = 0; i < oAtzar.length; i++) { // begin the loop

    var tNom = oAtzar[i].valueAsString; // value of the current traj field as a string

    if (tNom === "Jesuïta") { // if the word is the one I'm looking for...

      var tNiv = this.getField("niv." + i); // the niv field related to the traj field; same *i* value

      event.value = 16 - 1 * tNiv.value; // modify the result field (autocalculated)

    } else {

      event.value = 16; // no word, nothing happens

    }

  }

However, I found some issues:

  • It doesn’t modify the result field. I thought that since the i value is the same it would work, but it doesn’t.
  • Checking before publishing, I found that it finds the word, but only in the first field. If I write it in any other field of the array, it doesn’t find anything.

I know I can do it with a cascade of if…else, but I thought it would be faster (and easier) with an array and a loop.
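For reference, here is the pairing I’m after modeled in plain JavaScript, with two ordinary arrays standing in for the traj and niv fields (the values are made up for illustration):

```javascript
// traj[i] is paired with niv[i] by index: if the target word appears in
// some traj entry, the result is 16 minus the matching niv value;
// if the word is absent, the result stays at the default 16.
function computeResult(traj, niv, word) {
  for (var i = 0; i < traj.length; i++) {
    if (traj[i] === word) {
      return 16 - Number(niv[i]); // use the niv value with the same index
    }
  }
  return 16; // word not found: default result
}

var traj = ["", "", "Jesuïta", "", "", ""];
var niv = [0, 0, 3, 0, 0, 0];
console.log(computeResult(traj, niv, "Jesuïta")); // 13
console.log(computeResult(traj, niv, "Cavaller")); // 16
```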

I hope I have detailed it well enough!

Thank you in advance for your help!

Puppeteer – scroll down until you can’t anymore – script failing

I want to automate deleting my ChatGPT chats because I have about 244 of them, and I’m not going to do that manually. What I want to do is scroll down to the bottom until there are no more chat items, then delete them from last to first. The deletion part works, but I’m having some issues with the scrolling part.
console.log("scrollToBottom has been called");

await page.evaluate(async () => {
    const delay = 10000;
    const wait = (ms) => new Promise(res => setTimeout(res, ms));
    const sidebar = document.querySelector('#stage-slideover-sidebar');

    const count = () => document.querySelectorAll('#history aside a').length;

    const scrollDown = async () => {
        const lastChild = document.querySelector('#history aside a:last-child');
        if (lastChild) {
            lastChild.scrollIntoView({ behavior: 'smooth', block: 'end', inline: 'end' });
        }
    }

    let preCount = 0;
    let postCount = 0;
    let attempts = 0;
    do {
        preCount = count(); 
        await scrollDown();
        await wait(delay);
        postCount = count(); 
        console.log("preCount", preCount, "postCount", postCount, "attempts", attempts);
        if (postCount === preCount) {
            attempts++;
        } else {
            attempts = 0;
        }
    }  while (attempts < 10);

    console.log("Reached bottom. Total items:", postCount);

    // await wait(delay);
});

}
This works better than when I set the delay to 1, 2, or 3 seconds and the attempts to 3; with those settings it stops loading at 84 items.
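For what it’s worth, the stall-detection loop itself behaves as I expect when I model it outside the browser, with stubbed count() and scrollDown() functions standing in for the DOM queries:

```javascript
// Stand-alone model of the "scroll until the item count stops growing" loop.
const wait = (ms) => new Promise((res) => setTimeout(res, ms));

async function scrollUntilStable(count, scrollDown, { delayMs, maxStalls }) {
  let attempts = 0;
  let postCount = 0;
  do {
    const preCount = count();
    await scrollDown();
    await wait(delayMs);
    postCount = count();
    // reset the stall counter whenever new items appeared
    attempts = postCount === preCount ? attempts + 1 : 0;
  } while (attempts < maxStalls);
  return postCount;
}

// Stub for the sidebar: 5 "loads" of 20 items each, then no more.
let items = 0;
let loads = 0;
const count = () => items;
const scrollDown = async () => { if (loads < 5) { items += 20; loads++; } };

scrollUntilStable(count, scrollDown, { delayMs: 1, maxStalls: 3 })
  .then((n) => console.log("Reached bottom. Total items:", n)); // 100
```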
However, the issue I have with a 10-second delay and 10 attempts is that I run into this error after everything has loaded:

    #error = new Errors_js_1.ProtocolError();
             ^

ProtocolError: Runtime.callFunctionOn timed out. Increase the 'protocolTimeout' setting in launch/connect calls for a higher timeout if needed.
    at <instance_members_initializer> (/Users/pc/WebstormProjects/puppeteer/node_modules/puppeteer-core/lib/cjs/puppeteer/common/CallbackRegistry.js:102:14)
    at new Callback (/Users/pc/WebstormProjects/puppeteer/node_modules/puppeteer-core/lib/cjs/puppeteer/common/CallbackRegistry.js:106:16)
    at CallbackRegistry.create (/Users/pc/WebstormProjects/puppeteer/node_modules/puppeteer-core/lib/cjs/puppeteer/common/CallbackRegistry.js:24:26)
    at Connection._rawSend (/Users/pc/WebstormProjects/puppeteer/node_modules/puppeteer-core/lib/cjs/puppeteer/cdp/Connection.js:99:26)
    at CdpCDPSession.send (/Users/pc/WebstormProjects/puppeteer/node_modules/puppeteer-core/lib/cjs/puppeteer/cdp/CdpSession.js:73:33)
    at #evaluate 
(/Users/pc/WebstormProjects/puppeteer/node_modules/puppeteer-core/lib/cjs/puppeteer/cdp/ExecutionContext.js:363:50)
    at ExecutionContext.evaluate (/Users/pc/WebstormProjects/puppeteer/node_modules/puppeteer-core/lib/cjs/puppeteer/cdp/ExecutionContext.js:277:36)
    at IsolatedWorld.evaluate (/Users/pc/WebstormProjects/puppeteer/node_modules/puppeteer-core/lib/cjs/puppeteer/cdp/IsolatedWorld.js:100:30)
    at CdpFrame.evaluate (/Users/pc/WebstormProjects/puppeteer/node_modules/puppeteer-core/lib/cjs/puppeteer/api/Frame.js:364:43)
    at CdpFrame.<anonymous> (/Users/pc/WebstormProjects/puppeteer/node_modules/puppeteer-core/lib/cjs/puppeteer/util/decorators.js:109:27)

Node.js v22.19.0

How best can I approach this?

Redux vs. Zod: Clarifying their Roles in Modern React State Management

I’m working on a React application and am trying to understand the fundamental difference between Redux and Zod. I’ve seen both mentioned in discussions about managing state, and I’m confused about how they relate, if at all.

My current understanding is that:

Redux (specifically with React-Redux hooks like useSelector and useDispatch) is a state management library that provides a predictable container for your application’s global state.

Zod is a validation library, often used with form management libraries like React Hook Form.

Where I’m getting mixed up is the term “state.” I have two separate code snippets below and would appreciate an explanation of how they handle “state” differently.

Redux Code Example

Here’s how I’m using Redux to manage a simple counter’s state:

// store.js
import { createStore } from 'redux';

const initialState = {
  count: 0
};

function counterReducer(state = initialState, action) {
  switch (action.type) {
    case 'INCREMENT':
      return { ...state, count: state.count + 1 };
    default:
      return state;
  }
}

const store = createStore(counterReducer);

export default store;

// CounterComponent.jsx
import React from 'react';
import { useSelector, useDispatch } from 'react-redux';

const CounterComponent = () => {
  const count = useSelector(state => state.count);
  const dispatch = useDispatch();

  return (
    <div>
      <p>Count: {count}</p>
      <button onClick={() => dispatch({ type: 'INCREMENT' })}>
        Increment
      </button>
    </div>
  );
};

export default CounterComponent;

In this example, the count is the application’s state.

Zod Code Example

Here’s how I’m using Zod to validate a user form input:

import { z } from 'zod';

// Define the schema for our form data
const userSchema = z.object({
  username: z.string().min(3, { message: "Username must be at least 3 characters." }),
  email: z.string().email({ message: "Invalid email address." }),
});

// A sample piece of 'state' (form data) I might want to validate
const userData = {
  username: "jo", // This is invalid
  email: "not-an-email", // This is also invalid
};

try {
  userSchema.parse(userData);
} catch (e) {
  console.log(e.errors);
}

In this case, userData is the state I’m trying to validate.

My Core Questions
How do Redux’s state (the count) and Zod’s “state” (the userData) differ conceptually?

In a typical application, where would you use Redux vs. Zod, and why?

Are there scenarios where they can be used together, and if so, how do they complement each other? For example, would I validate Redux state with Zod?
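To make the third question concrete, here is roughly the combination I picture, with tiny hand-rolled stand-ins for both libraries (checkUser plays the role of a Zod schema’s safeParse, and the reducer/dispatch pair plays the role of a Redux store). This is only to illustrate the division of responsibilities, not the real APIs:

```javascript
// Stand-in for a Zod schema: returns { success, ... } like safeParse would.
function checkUser(data) {
  if (typeof data.username !== "string" || data.username.length < 3) {
    return { success: false, error: "Username must be at least 3 characters." };
  }
  return { success: true, data };
}

// Stand-in for a Redux store: one reducer plus a dispatch loop.
function userReducer(state = { user: null }, action) {
  switch (action.type) {
    case "SET_USER":
      return { ...state, user: action.payload };
    default:
      return state;
  }
}

let state = userReducer(undefined, { type: "@@INIT" });
function dispatch(action) { state = userReducer(state, action); }

// Validation happens at the boundary, before the data enters the store.
const input = { username: "joan" };
const checked = checkUser(input);
if (checked.success) {
  dispatch({ type: "SET_USER", payload: checked.data });
}
console.log(state.user); // → { username: 'joan' }
```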

I’m looking for an answer that clarifies the separate roles and responsibilities of these libraries, not just a list of features.

React + Tailwind: Flip card not showing back image

I’m trying to implement a flip card animation in React with Tailwind.
The rotation works (the card flips in 3D), but the back image does not appear: when the card rotates, it stays blank or still shows the front.

I tried using backgroundImage together with rotateY(180deg) and backface-visibility: hidden, but the back side never shows.
How can I make the back side of the card (CardBack) visible when the card rotates 180°?

Here’s a minimal example of my code:

import { useState } from "react";
import CardCover from "../cardImages/cardCover.png";
import CardBack from "../cardImages/cardBack.png";

export default function TestCard() {
  const [isFlipped, setIsFlipped] = useState(false);

  return (
    <div
      className="w-[200px] h-[300px] cursor-pointer [perspective:1000px]"
      onClick={() => setIsFlipped(!isFlipped)}
    >
      <div
        className="relative w-full h-full transition-transform duration-500"
        style={{
          transformStyle: "preserve-3d",
          transform: isFlipped ? "rotateY(180deg)" : "rotateY(0deg)",
        }}
      >
        {/* Front */}
        <div
          className="absolute w-full h-full rounded-xl bg-cover bg-center"
          style={{
            backgroundImage: `url(${CardCover})`,
            backfaceVisibility: "hidden",
          }}
        ></div>

        {/* Back */}
        <div
          className="absolute w-full h-full rounded-xl bg-cover bg-center"
          style={{
            backgroundImage: `url(${CardBack})`,
            transform: "rotateY(180deg)",
            backfaceVisibility: "hidden",
          }}
        ></div>
      </div>
    </div>
  );
}

npx tailwindcss init gives “npm error could not determine executable to run” in PowerShell [duplicate]

I recently started learning Tailwind CSS. As per my instructor’s setup, I am using PowerShell to configure a Node.js project.

When I run the following command:

PS C:\Users\ABHISHEK\Projects> npx tailwindcss init

I get this error:

npm error could not determine executable to run
npm error A complete log of this run can be found in: C:\Users\ABHISHEK\AppData\Local\npm-cache\_logs\2025-09-12T10_03_13_713Z-debug-0.log

Before that, I ran these commands (all without errors):

npm init -y
npm install -D tailwindcss postcss autoprefixer
npm install vite

But npx tailwindcss init still fails.

Why does npx tailwindcss init fail with this error even though Tailwind CSS is installed, and how can I fix it?

AI video call project [closed]

I want to make a video call in which one side is an AI and the other side is a real user.

During the call, the user will show their identity card, and as soon as the user shows the identity card a screenshot should be captured. The screenshot should be captured automatically: only when the user shows the identity card in the video call, and not otherwise. It should then be passed to the backend.

I want to build the AI side.

I tried using MediaPipe to detect the card, but it did not work.

How to dynamically detect the active page builder in use on a WordPress page

I am developing a WordPress plugin to analyze which page builder (e.g., Elementor, WPBakery, Divi) is actively being used to render the current page, not just which builder plugins are installed and active on the site (similar to what Wappalyzer and BuiltWith do).

My current approach is flawed because it only checks a pre-defined list of plugins against the active plugins list. This tells me if a builder is installed, but not if it was actually used to build this specific page.

private function analyze_builders()
{
    $all_plugins = get_plugins();
    $builders = [];

    $builder_list = [
        'elementor/elementor.php' => 'Elementor',
        'wpbakery-visual-composer/wpbakery.php' => 'WPBakery',
        'divi-builder/divi-builder.php' => 'Divi Builder',
        // ... other builders
    ];

    foreach ($builder_list as $slug => $label) {
        if (isset($all_plugins[$slug])) {
            $builders[] = [
                'name' => __($label, 'pluginbuilder'),
                'status' => is_plugin_active($slug) ? __('Active', 'pluginbuilder') : __('Inactive', 'pluginbuilder')
            ];
        }
    }

    if (empty($builders)) {
        return [
            [
                'name' => __('No builder used', 'pluginbuilder'),
                'status' => ''
            ]
        ];
    }

    return $builders;
}

The Problem:
This method fails in two key scenarios:

  1. If a site has multiple page builders active (e.g., both Elementor and WPBakery), it returns both, but doesn’t tell me which one built this page.

  2. If a page builder is used that is not on my pre-defined list, it returns “No builder,” which is incorrect.

What I Need:
I need a way to dynamically detect the page builder from the page content itself. I’m looking for a reliable method to check the current post’s metadata or content for tell-tale signs of a specific page builder.

My Research & Ideas:
I’ve researched and believe the solution might involve checking for builder-specific patterns, such as:

  • Post Meta Data: Checking the _wp_page_template meta key or builder-specific keys like _elementor_edit_mode or _wpb_shortcodes_custom_css.

  • Content Analysis: Scanning the post content for shortcodes ([vc_row]) or HTML comments (<!-- /wp:shortcode -->) and CSS classes specific to a builder.

  • Database Queries: Perhaps performing a specific database query on the postmeta table for the current post ID.

My Question:
What is the most robust and performant method to detect which page builder was used for the current WordPress page? I am particularly interested in hooks, filters, or database queries that are unique to major page builders like Elementor, WPBakery, and Divi.

Example of desired output:
For a page built with Elementor, the function should return 'Elementor'.
For a classic page with no builder, it should return false or 'None'.

My PHP http_response_code is not sending status 200 but status code 302?

Hi all, I am facing an issue with my webhook response code: based on my HTTP server logs, it is sending an HTTP response code of 302 (redirect).
This 302 redirect is overriding the default HTTP response code 200 I have set up in my script.
The 302 redirect comes from the require_once calls in my main script below, even though they run after my http_response_code(200).
Any suggestion on how to ensure that only my main script’s http_response_code(200) is sent out, and not a redirect from the require_once files?
We are using PHP version 5 on our end.
Code snippet below:

  if (hash_equals($provided_signature, $yourHash)) {

    http_response_code(200);

    if ($product === 'v1') {
        $pymntGateway = 'curlec';
        require_once "v1_backend.php"; // 302 redirect originates here
    }
    elseif ($product === 'v2') {
        $pymntGateway = 'curlec';
        require_once "v2_backend.php"; // 302 redirect originates here
    }
    else{
        http_response_code(202);
        exit('Unknown product.');
    }

  }
  else {
      http_response_code(403); // Forbidden
      exit('Invalid signature.');
  }

I want to work with Postman and PHP without a database [closed]

I want to work with Postman and PHP without a database. I know this would all be temporary, but it’s just for testing. My goal is to create a temporary database in Postman with 5 to 10 records in JSON format. Then, I want to display that in a PHP API file and be able to insert, update, and delete records within that JSON from the PHP API file. Is this possible without using any kind of database (SQL or NoSQL databases, a JSON file, an array, a session, or a cookie)? If yes, how? If no, why not?

Here is my PHP API code:

<?php
// PHP Script to receive and display JSON data from Postman

// 1. Set the content type header.
// This tells Postman that the response will be in JSON format.
header('Content-Type: application/json');

// 2. Get the raw JSON data from the request body.
// This is the core function that receives data without a database.
$json_data = file_get_contents('php://input');

// 3. Decode the JSON string into a PHP array.
// This makes the data readable and usable in PHP.
$request_data = json_decode($json_data, true);

// 4. Check if data was received.
if ($request_data) {
    // 5. Create a response array with the received data.
    $response = [
        'status' => 'success',
        'message' => 'Data received and displayed successfully!',
        'received_data' => $request_data
    ];
} else {
    // If no data was received.
    $response = [
        'status' => 'error',
        'message' => 'No JSON data received or data is invalid.'
    ];
}

// 6. Encode the PHP array back to JSON and send it as the response.
echo json_encode($response, JSON_PRETTY_PRINT);
?>

Here is Postman result:

    {
        "status": "success",
        "message": "Data received and displayed successfully!",
        "received_data": {
            "user_id": 1,
            "product_name": "Laptop",
            "price": 1200
        }
    }

It still does not display in the PHP file. That is my question: why?

Here is the PHP file result:

{
    "status": "error",
    "message": "No JSON data received or data is invalid."
}

Improving security at login using a file [closed]

A (the username-and-password method):

  1. The password is stored as a password_hash.

  2. We retrieve the password hash using the username and verify it with password_verify.

  3. If the verification is successful, we update the same field (cell) again with a new password_hash.

B (the file method):

I have a table where the password column has a unique index, and the passwords are stored as hash('sha3-512', $_POST['password']). My code theoretically works and has no issues, but I want to know whether, to increase security, it is possible to store the passwords using password_hash($_POST['password'], PASSWORD_DEFAULT) and still be able to look them up via a query.

<input type="file" id="file">

<textarea style="display: block;width: 300px;height: 150px;" id="password"></textarea>

<script>
    document.getElementById("file").addEventListener("change", function (event) {

        const filereader = new FileReader();
        filereader.onload = function () {
            var filedata = filereader.result.split(',')[1];
            const datalength = filedata.length;
            // take 200 characters around the 2/9, 5/9 and 8/9 points of the data
            filedata = [2, 5, 8]
                .map(n => Math.round((datalength * n) / 9))
                .map(p => filedata.slice(p - 100, p + 100))
                .join('');
            if (/^[a-zA-Z0-9+=/]*$/.test(filedata)) {
                document.getElementById('password').value = filedata;
            }
        };
        filereader.readAsDataURL(event.target.files[0]);
    });
</script>
<?php

//create by file :
if (!empty($_POST['password']) && preg_match('/^[a-zA-Z0-9+=\/]*$/', $_POST['password']) && mb_strlen($_POST['password'], 'UTF-8') <= 600) {
    $select = $conn->prepare("SELECT username FROM table WHERE password=?");
    $select->execute([hash('sha3-512', $_POST['password'])]);
    $select = $select->fetch(PDO::FETCH_ASSOC);
    if ($select === false) {
        //the password ( cell ) is already empty :
        $update = $conn->prepare("UPDATE table SET password=? WHERE username=?");
        $update->execute([hash('sha3-512', $_POST['password']), $_POST['username']]);
        //create ...
    }
    $conn = null;
}

//login by file :
if (!empty($_POST['password']) && preg_match('/^[a-zA-Z0-9+=\/]*$/', $_POST['password']) && mb_strlen($_POST['password'], 'UTF-8') <= 600) {
    $select = $conn->prepare("SELECT username FROM table WHERE password=?");
    $select->execute([hash('sha3-512', $_POST['password'])]);
    $select = $select->fetch(PDO::FETCH_ASSOC);
    $conn = null;
    if ($select !== false) {
        //login ...
    }
}

Note: I explained the first method just to show that I’m familiar with it, but I’m not using it. My issue is with the second method: I want to know whether it’s possible to store the filedata with password_hash($_POST['password'], PASSWORD_DEFAULT) and still be able to look it up (what I mean is that the user should never have to type a username; they should be able to log in to the system using only the file).

WP cron job is not triggering a custom plugin

I am working on a class-based plugin. I need a custom cron job that is active while the plugin is active and runs a plugin function. I am facing an issue where the cron does not call the plugin function, even though I have verified that the cron event has been created. Below is my code; let me know what I am missing. smcp_cron_do_task() is not triggered by the cron job.

Also, I’m testing it in my local Docker system.

class ClassName {

    public function __construct() {
        // Ensure custom cron intervals are registered early
        add_filter( 'cron_schedules', array( $this, 'add_custom_intervals' ) );

        // Hooks
        register_activation_hook( __FILE__, array( $this, 'smcp_cron_activate' ));
        register_deactivation_hook( __FILE__, array( $this, 'smcp_cron_deactivate' ));

        add_action( 'smcp_cron_task_hook', array( $this, 'smcp_cron_do_task' ) );
    }

    // Schedule on activation
    function smcp_cron_activate() {
        if ( ! wp_next_scheduled( 'smcp_cron_task_hook' ) ) {
            wp_schedule_event( time(), 'every_minute', 'smcp_cron_task_hook' );
        }
    }

    // Clear scheduled event on deactivation
    function smcp_cron_deactivate() {
        $timestamp = wp_next_scheduled( 'smcp_cron_task_hook' );
        if ( $timestamp ) {
            wp_unschedule_event( $timestamp, 'smcp_cron_task_hook' );
        }
    }

    function smcp_cron_do_task() {
        // Custom Code
        error_log( 'My custom cron job ran at: ' . current_time('mysql') );
    }

    public function add_custom_intervals( $schedules ) {
        $schedules['every_minute'] = array(
            'interval' => 60,
            'display'  => __( 'Every Minute' ),
        );
        return $schedules;
    }
}

// Initialize plugin
new ClassName();

I want to trigger this smcp_cron_do_task() by the cron job.

Cannot import @google-apps/meet from Google Meet API

Hello,

I’m trying just one line of code:

import { SpacesServiceClient } from '@google-apps/meet';

and I get this error:

Uncaught ReferenceError: require is not defined

I have run:

npm install @google-apps/meet @google-cloud/[email protected] --save

I have also imported

import { initializeApp } from 'firebase/app';
and it works fine.

reconnection issue in WhiskeySockets / Baileys

I’m using "@whiskeysockets/baileys": "^6.7.18". When I connect using the QR code and log in, all the event emitters work fine, and I store my auth state in the DB so I can reconnect later.

If I scan and log in to my account and call makeWASocket, everything works fine. But after some time, or whenever my server restarts and I reconnect with my stored auth state without scanning the QR code, none of the events work except the creds.update event. I call the same connect function both times, and my socket is created with makeWASocket.

I want to be able to use all the events after reconnecting to Baileys, but this is not working.

How to simulate Javascript Fetch CancellationToken with XMLHttpRequest

I want to retain progress-bar functionality for file uploads to the server, so at the moment my belief is that the JavaScript fetch API is out of the question.

So I’m implementing the traditional XMLHttpRequest. The problem is that XMLHttpRequest does not support a CancellationToken in its API. So I looked at HTTP headers etc. for how the CancellationToken gets communicated to the server.

Judging by a whole lot of SO posts and posts elsewhere, “it simply doesn’t”. The consensus is that, just like closing the browser or losing the network connection, simply calling the xhr.abort() method will activate the appropriate CancellationToken on the server side. Is this true? (I imagine the server might wish to distinguish between the various abort scenarios.)

But then why would .NET support `CancellationToken myToken = default` if there is no association with the client’s corresponding value? And worse, I’d be forced to poll the cancellation token in JS to work out when to call abort() 🙁

So I guess a precise question would be: “What value gets stored in the .NET HttpContext.RequestAborted field, and how does it relate, if at all, to the client’s value supplied in a fetch?”

And finally, how can I manipulate the XHR so that my cancellation gets communicated to the server’s .NET controller?
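For context, the fetch-side mechanism I’m trying to reproduce is AbortController, whose signal plays roughly the role I would expect of a client-side cancellation token. A minimal sketch, with a stubbed upload in place of a real request:

```javascript
// The AbortController's signal is the browser-side analogue of a
// cancellation token: aborting rejects the pending operation, and the
// server only ever observes the connection being dropped.
function fakeUpload(signal) {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => resolve("upload complete"), 5000);
    signal.addEventListener("abort", () => {
      clearTimeout(timer);
      reject(new Error("upload aborted"));
    });
  });
}

const controller = new AbortController();
fakeUpload(controller.signal).catch((e) => console.log(e.message)); // "upload aborted"
controller.abort(); // the counterpart of calling xhr.abort()
```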

EDIT 1

Just thinking: HTTP has been multiplexing TCP/IP sockets since, I think, version 1.1, so there must be some packet ID / request ID / socket ID mechanism going on.

How to emulate the “maxdepth” and “mindepth” options of the Linux “find” command while merging filtered files into one file?

I need a self-contained (that is, based only on built-in functionality, with no external dependencies and no use of system shell functionality beyond the command-line interface itself) Node.js solution to the following problem:

Given a set of paths to directories, find all files in these directories, assuming that: (1) a filename must match a particular regex; (2) the depth of scanning is limited to M levels, and all elements whose position in the hierarchy is less than N are ignored, which is supposed to emulate the maxdepth and mindepth options of the Linux find command. Merge all found files into one file, putting a string (in some XML-like format) identifying the path of the corresponding file before its content (on a separate line).

For example, I have the following structure:

a
>b
>>cd
>>>e.txt
>>f
>>>g00.txt
>>g
>>>h
>>>>ea.txt
>>>ea.txt
b
>g
>>eo.txt
f
>5
>>ef.txt
>a
>>e3
>>>i2.txt
>b
>>cd
>>>eoe.txt
>g
>>h
>>>ij
>>>>ke
>>>>>e1.txt
>h
>>a1.txt
>t
>>1
>>>1
>>>>a10.txt

The set of paths to directories is ["a/", "f/"]. The regex for filenames is ^e. The maximum depth is 3. The minimum depth is 0.

Then the output file output/file.txt must contain the following:

<path>/a/b/cd/e.txt</path>
...data from the file whose path is "a/b/cd/e.txt"...
<path>/a/b/g/ea.txt</path>
...data from the file whose path is "a/b/g/ea.txt"...
<path>/f/5/ef.txt</path>
...data from the file whose path is "f/5/ef.txt"...
<path>/f/b/cd/eoe.txt</path>
...data from the file whose path is "f/b/cd/eoe.txt"...

Note that, for example, the file whose path is "a/b/g/h/ea.txt" is ignored because it is located at the fourth (a/b/g/h/) level of hierarchy.
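To pin down the depth rule, here is the selection applied to a flat list of the example paths. This is just a model of the expected output (and is exactly the kind of list-everything-then-filter approach I want to avoid in the real solution):

```javascript
// Depth of a file = number of path segments below its root directory,
// so "a/b/cd/e.txt" under root "a/" has depth 3 (b, cd, e.txt).
function filterByDepth(paths, roots, nameRegex, minDepth, maxDepth) {
  const out = [];
  for (const p of paths) {
    const root = roots.find((r) => p.startsWith(r));
    if (!root) continue; // not under any of the given root directories
    const segments = p.slice(root.length).split("/");
    const depth = segments.length;
    const name = segments[segments.length - 1];
    if (depth >= minDepth && depth <= maxDepth && nameRegex.test(name)) {
      out.push(p);
    }
  }
  return out;
}

const all = [
  "a/b/cd/e.txt", "a/b/f/g00.txt", "a/b/g/h/ea.txt", "a/b/g/ea.txt",
  "b/g/eo.txt", "f/5/ef.txt", "f/a/e3/i2.txt", "f/b/cd/eoe.txt",
  "f/g/h/ij/ke/e1.txt", "f/h/a1.txt", "f/t/1/1/a10.txt",
];
console.log(filterByDepth(all, ["a/", "f/"], /^e/, 0, 3));
// → ["a/b/cd/e.txt", "a/b/g/ea.txt", "f/5/ef.txt", "f/b/cd/eoe.txt"]
```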

I have found some examples of how to merge files (for example, a/64513108), but I have not found how to limit the depth of scanning the directories.

Minimization of memory consumption and maximization of performance are important: the number of files can be very big (and the size of the output file may be much larger than the available RAM). I realize that it is possible to obtain the needed output if I make a list of all directories and files, then filter the files with a special regex, but this looks like a naive, inefficient way because it does not limit the depth of scanning, it simply makes a needlessly huge list of all elements.