HTML canvas game, painter's algorithm

In x number of days I'm offering the maximum bounty, because I have been stuck on this single problem for 6+ months. I keep thinking I have it close, just for an edge case to bite me in the ___ later. I need to find a solution that will always work…

The Objects
The only objects that exist in the world are polygons with a height. The polygon is defined by a position, with vertices as offset points from its position:

{
   position: {
      x: null,
      y: null,
      z: null
   },
   hitbox: {
      vertices: [
         [x,y],[x,y],[x,y]
      ],
      height: null,
      search_distance: null, // maximum possible distance across a polygon
   },
   max_vertice_y: null, // y position of the topmost vertex
   min_vertice_y: null // y position of the bottommost vertex
}

I have an algorithm that splits polygons into smaller polygons to prevent the scenario of a polygon being both in front of and behind another polygon. So for this example, let's assume that every polygon is entirely in front of or behind any other.

Contrary to standard screen positioning, this world treats lower y values as the bottom of the screen and larger y values as further toward the top.

The Problem
It appears that every sort method I have come up with in the past, and the current one I show below, all share the same issue: a polygon (A) may be entirely below another (B), and is therefore printed before it. However, later in the list a polygon (C) is found that is printed before (B) and after (A), because of the z position and height of (C) paired with the y position of (A). I can't seem to create a method that preserves the truth of one polygon compared to another without affecting the outcome of polygons sorted after it. I'm researching a topological sort at the moment, but I can't seem to define what a "dependency" is. Each polygon only has a list of everything before and after it, which can change later if it appears before a polygon that itself appears before something in the "after" list of its predecessor.

Because of this I have tried a variation of merge sort, but my truths (conditions) break because of the situation mentioned above.
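For the topological-sort angle, one way to pin down a "dependency" is the pairwise relation itself: add an edge A → B whenever A must be painted before B, and never emit a polygon until everything with an edge into it has been emitted; pairs with no edge in either direction can land in any relative order without disturbing anything else. A minimal sketch of this (assuming a hypothetical pairwise test `drawsBefore(a, b)` that would wrap the sameZPlane / ray-cast checks from the code below):

```javascript
// Hedged sketch, not a drop-in fix: Kahn's algorithm over pairwise
// "must be painted before" edges. drawsBefore(a, b) is a hypothetical
// comparator returning true when `a` must be painted before `b`.
function topoSortPolygons(polys, drawsBefore) {
  const n = polys.length;
  const after = Array.from({ length: n }, () => []); // after[i] = indices that must come after i
  const indeg = new Array(n).fill(0);                // how many polygons must precede each index
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      if (i !== j && drawsBefore(polys[i], polys[j])) {
        after[i].push(j);
        indeg[j]++;
      }
    }
  }
  // Repeatedly emit a polygon that nothing (remaining) must precede.
  const queue = [];
  for (let i = 0; i < n; i++) if (indeg[i] === 0) queue.push(i);
  const order = [];
  while (queue.length) {
    const i = queue.shift();
    order.push(polys[i]);
    for (const j of after[i]) if (--indeg[j] === 0) queue.push(j);
  }
  // order.length < n means the pairwise tests form a cycle,
  // i.e. the polygons still need splitting.
  return order;
}
```

The pair loop is O(n²), but the key property is that emitting one polygon can never invalidate an earlier decision, which is exactly what the insertion-sort approach cannot guarantee; a cycle in the edges is the one case left over, and that is what the polygon-splitting step is meant to eliminate.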


My most recent approach

function sortTerrain() {
    // terrain_ground and terrain_decor are objects with the properties mentioned above
    
    // create working "grab from list"
    let comp_work = [...terrain_ground, ...terrain_decor];
    
    // create sorted list
    let working = [comp_work.shift()];
    
    // while grab list isn't empty, take from it and insert into sorted list
    while (comp_work.length > 0) {
        let current = comp_work.shift();
        
        // get index of insert position
        let insert = -1;
        
        // start from the furthest-forward printed object in the sorted list and work backwards until a truth is found
        // **this originally worked from start to end, exiting with the last saved index once a truth was not found ... that WORKS, but not as well as looking for a single truth. both still create the issue mentioned above
        for (let i=working.length-1; i>=0; i--) {
            let compare = working[i];
            
            if (sameZPlane(current, compare)) {
                // on same z plane, sort by y
                
                // if guaranteed infront of
                if (current.max_vertice_y < compare.min_vertice_y) {
                    insert = i;
                    break;
                }
                
                // if overlap on y
                if (current.min_vertice_y < compare.max_vertice_y && current.max_vertice_y > compare.min_vertice_y) {
                    
                    // check for ray cast to determine current is in front of compare
                    // also check rear-forward using rayCastInfrontOf(x, x, true) in case rear is smaller than the gap between a vertice pair of the front polygon
                    if (rayCastInfrontOf(current, compare) || rayCastInfrontOf(compare, current, true)) {
                        insert = i;
                        break;
                    } else if (current.min_vertice_y < compare.min_vertice_y) {
                        // if not, then check if current min is in front of compare min
                        insert = i;
                        break;
                    }
                    
                }
                
            } else {
                
                // sort by z index
                if (current.position.z > compare.position.z) {
                    insert = i;
                    break;
                } else if (current.position.z == compare.position.z) {
                    // shared z position yet failing sameZPlane is a 0 height terrain, sort by height 
                    // THIS IS UNLIKELY TO CONTRIBUTE TO ANY PRINT ERRORS
                    if (current.hitbox.height > compare.hitbox.height) {
                        insert = i;
                        break;
                    }
                }
                
            }

        }
        
        // no insert location found, add at start of the stack
        if (insert == -1) {
            working.unshift(current);
        } else {
            // insert after last found index
            working.splice(insert+1, 0, current);
        }

    }
    // return the sorted draw order
    return working;
}

function expandTerrainVertices(terrain, amount) {
    // can of worms
}

function rayCastInfrontOf(front, rear, reverse = false) {
    // allow reverse check by sending truth boolean to reverse param
    // safe ray length
    let ray = (front.hitbox.search_distance + rear.hitbox.search_distance) * (reverse ? -1 : 1);
    
    // shrink vertices of "front" so that rays casted will not give false positive to polygons sharing a vertice point + position value
    let shrink_front_vertices = expandTerrainVertices(front, -1);
    
    // run ray from all vertice points of front backwards (or rear forwards if reverse param is true) and see if it intersects any vertice connections in rear
    // "middle" is used to get next safe vertice pair
    let middle = 0;
    for (let i=0; i<shrink_front_vertices.length; i++) {
        middle = i+1;
        if (middle == shrink_front_vertices.length) {
            middle = 0;
        }
        
        // cast from middle point of vertices rather than from the vertice position itself, this is safer
        let middle_point = {
            x: (front.position.x + shrink_front_vertices[i][0] + front.position.x + shrink_front_vertices[middle][0])/2,
            y: (front.position.y + shrink_front_vertices[i][1] + front.position.y + shrink_front_vertices[middle][1])/2
        }
        
        // loop rear vertice pairs
        let next = 0;
        for (let i2=0; i2<rear.hitbox.vertices.length; i2++) {
            next = i2+1;
            if (next == rear.hitbox.vertices.length) {
                next = 0;
            }
            
            // check if ray cast from the middle position of the "front" polygon intercepts any vertice pairs on "rear" polygon
            if (intercepts(
                middle_point.x, middle_point.y,
                middle_point.x, middle_point.y + ray,
                rear.position.x + rear.hitbox.vertices[i2][0], rear.position.y + rear.hitbox.vertices[i2][1],
                rear.position.x + rear.hitbox.vertices[next][0], rear.position.y + rear.hitbox.vertices[next][1]
            )) {
                return true;
            }
        }
    }
    
    return false;
}

// get intersection point of two line segments, or false if they don't intersect
function intercepts(x1,y1,x2,y2,x3,y3,x4,y4) {
    let s1_x = x2 - x1;
    let s1_y = y2 - y1;
    let s2_x = x4 - x3;
    let s2_y = y4 - y3;
    let s = (-s1_y * (x1 - x3) + s1_x * (y1 - y3)) / (-s2_x * s1_y + s1_x * s2_y);
    let t = ( s2_x * (y1 - y3) - s2_y * (x1 - x3)) / (-s2_x * s1_y + s1_x * s2_y);
    if (s >= 0 && s <= 1 && t >= 0 && t <= 1) {
        return [precise(x1 + (t * s1_x)), precise(y1 + (t * s1_y))];
    }
    return false;
}

function sameZPlane(a, b) {
    let a_top = precise(a.position.z + a.hitbox.height);
    let b_top = precise(b.position.z + b.hitbox.height);
    if (
        (a_top > b.position.z && a.position.z < b_top) ||
        (b_top > a.position.z && b.position.z < a_top)
    ) {
        return true;
    }
    return false;
}

My PokeAPI call is getting a fetch error, but why? [closed]

I built this Pokémon search engine for freecodecamp.org and it passed the tests, so now I am trying to improve it. I don't know all the Pokémon, so I added the PokéAPI. I can access it fine from my browser, but it throws an error when I search for Pikachu.

const searchInput = document.getElementById('search-input');
const searchButton = document.getElementById('search-button');
const pokemonInfo = document.getElementById('pokemon-info'); // Use this for appending elements
const pokemonName = document.getElementById('pokemon-name');
const pokemonId = document.getElementById('pokemon-id');
const weight = document.getElementById('weight');
const height = document.getElementById('height');
const types = document.getElementById('types');
const hp = document.getElementById('hp');
const attack = document.getElementById('attack');
const defense = document.getElementById('defense');
const specialAttack = document.getElementById('special-attack');
const specialDefense = document.getElementById('special-defense');
const speed = document.getElementById('speed');
const sprite = document.getElementById('sprite');


searchButton.addEventListener('click', async () => {
    const searchTerm = searchInput.value.toLowerCase();
    clearPokemonInfo(); // Clear previous data before searching

    const loadingMessage = document.createElement('p');
    loadingMessage.textContent = 'Loading...';
    pokemonInfo.appendChild(loadingMessage);

    try {
        const response = await fetch(`https://pokeapi.co/api/v2/pokemon/${searchTerm}`);
        if (!response.ok) {
            throw new Error(`Pokémon not found (HTTP status: ${response.status})`);
        }
        const pokemonData = await response.json();
        validatePokemonData(pokemonData, searchTerm);
        await displayPokemonInfo(pokemonData); //Await the display
    } catch (error) {
        const errorMessage = document.createElement('p');
        errorMessage.textContent = `Error: ${error.message}`;
        errorMessage.style.color = 'red';
        pokemonInfo.appendChild(errorMessage);
    } finally {
      pokemonInfo.removeChild(loadingMessage); //Remove loading message
    }
});

async function displayPokemonInfo(pokemonData) {
    pokemonName.textContent = pokemonData.name.toUpperCase();
    pokemonId.textContent = pokemonData.id;
    weight.textContent = pokemonData.weight;
    height.textContent = pokemonData.height;
    hp.textContent = pokemonData.stats.find(stat => stat.stat.name === 'hp').base_stat;
    attack.textContent = pokemonData.stats.find(stat => stat.stat.name === 'attack').base_stat;
    defense.textContent = pokemonData.stats.find(stat => stat.stat.name === 'defense').base_stat;
    specialAttack.textContent = pokemonData.stats.find(stat => stat.stat.name === 'special-attack').base_stat;
    specialDefense.textContent = pokemonData.stats.find(stat => stat.stat.name === 'special-defense').base_stat;
    speed.textContent = pokemonData.stats.find(stat => stat.stat.name === 'speed').base_stat;
    sprite.src = pokemonData.sprites.front_default;


    //Improved type display
    const typesElement = document.createElement('p');
    typesElement.innerHTML = 'Types: ';
    pokemonData.types.forEach(type => {
        const typeIcon = document.createElement('img');
        typeIcon.src = `type-icons/${type.type.name}.png`; //Requires type-icons folder with images
        typeIcon.alt = type.type.name;
        typeIcon.width = '20';
        typeIcon.height = '20';
        typesElement.appendChild(typeIcon);
        typesElement.appendChild(document.createTextNode(type.type.name.toUpperCase() + ' '));
    });
    pokemonInfo.appendChild(typesElement);

    // Fetch and display abilities - more robust handling of errors and multiple abilities
    const abilities = await Promise.all(pokemonData.abilities.map(async ability => {
        const response = await fetch(ability.url);
        if (!response.ok) {
            console.error(`Error fetching ability: ${ability.url}`, response.status);
            return null; //Handle failed ability fetch gracefully
        }
        const abilityData = await response.json();
        return abilityData.name; // Return just the name of the ability
    }));
    const abilitiesList = abilities.filter(a => a !== null).join(', '); //Filter out null values before joining.
    const abilitiesElement = document.createElement('p');
    abilitiesElement.innerHTML = `Abilities: <span>${abilitiesList}</span>`;
    pokemonInfo.appendChild(abilitiesElement);


}


function validatePokemonData(pokemonData, searchTerm) {
    // Add your validation logic here based on the test cases. For example:
    if (searchTerm === 'pikachu') {
        // ... (Your Pikachu validation code) ...
    } else if (searchTerm === '94') {
        // ... (Your Gengar validation code) ...
    }
    //Add other validations as needed.
}


function clearPokemonInfo() {
    while (pokemonInfo.firstChild) { //Completely clear the pokemonInfo div.
      pokemonInfo.removeChild(pokemonInfo.firstChild);
    }
}

I tried searching for Pikachu on the API itself and it does work, so there is definitely something wrong on my end.
The error is "Error: Failed to fetch".
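A note that may help with debugging (the response shape below is an assumption worth verifying against the live API): "Failed to fetch" is thrown before any HTTP status exists, meaning the request never completed at all, so the first thing to check is the exact URL each fetch receives. In the /pokemon response, each entry of `abilities` nests its URL one level down, so `ability.url` in the code above would be undefined, and `fetch(undefined)` requests the relative URL "undefined":

```javascript
// Mock of the relevant slice of a PokeAPI /pokemon response (assumed shape).
const sampleResponse = {
  abilities: [
    { ability: { name: 'static', url: 'https://pokeapi.co/api/v2/ability/9/' }, is_hidden: false, slot: 1 },
    { ability: { name: 'lightning-rod', url: 'https://pokeapi.co/api/v2/ability/31/' }, is_hidden: true, slot: 3 }
  ]
};

// The property read in the question's code -- every entry comes back undefined:
const wrongUrls = sampleResponse.abilities.map(a => a.url);

// Reading one level deeper yields the real endpoint URLs:
const rightUrls = sampleResponse.abilities.map(a => a.ability.url);
```

Logging the URL just before each `fetch` call (or watching the Network tab) would confirm whether this is what's happening.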

Custom Post Type WordPress, issue saving Image Gallery

I’m working on a custom post type in WordPress for “Projects” that includes three custom meta boxes: one for a project title, one for a project description, and one for an image gallery. The gallery meta box allows me to add images using the WordPress media uploader. The images display correctly in the meta box immediately after being added, but once I update the project and refresh the page, the gallery images vanish.

Here’s a summary of my setup:

Custom Post Type: “progetti” (Projects) with support for featured images.
Meta Boxes:
Project Title: Uses wp_editor and saves its value as a meta field (_progetto_titolo).
Project Description: Also uses wp_editor (saved as _progetto_descrizione).
Gallery: Stores a comma-separated list of attachment IDs in a meta field (_progetto_gallery). It includes JavaScript that adds and removes images from the gallery.
I initially updated the actual post title by calling wp_update_post() inside my save_post hook after saving the custom title. I suspected this was triggering a second save cycle that might be wiping out other meta values (like the gallery). So I tried a couple of approaches:

  1. Temporarily removing the save_post hook before calling wp_update_post(), then re-adding it.
  2. Using the wp_insert_post_data filter to update the title before the post is saved.
I even consolidated my nonce usage into a single nonce field to avoid potential conflicts. Despite these attempts, the gallery meta field still ends up empty after the post is updated and reloaded.

Snippet below:

// Registra il Custom Post Type "Progetti"
function create_custom_post_type_progetti() {
    $labels = array(
        'name'                  => _x( 'Progetti', 'Post Type General Name', 'textdomain' ),
        'singular_name'         => _x( 'Progetto', 'Post Type Singular Name', 'textdomain' ),
        'menu_name'             => __( 'Progetti', 'textdomain' ),
        'name_admin_bar'        => __( 'Progetto', 'textdomain' ),
        'archives'              => __( 'Archivio Progetti', 'textdomain' ),
        'attributes'            => __( 'Attributi Progetto', 'textdomain' ),
        'parent_item_colon'     => __( 'Progetto Genitore:', 'textdomain' ),
        'all_items'             => __( 'Tutti i Progetti', 'textdomain' ),
        'add_new_item'          => __( 'Aggiungi Nuovo Progetto', 'textdomain' ),
        'add_new'               => __( 'Aggiungi Nuovo', 'textdomain' ),
        'new_item'              => __( 'Nuovo Progetto', 'textdomain' ),
        'edit_item'             => __( 'Modifica Progetto', 'textdomain' ),
        'update_item'           => __( 'Aggiorna Progetto', 'textdomain' ),
        'view_item'             => __( 'Visualizza Progetto', 'textdomain' ),
        'view_items'            => __( 'Visualizza Progetti', 'textdomain' ),
        'search_items'          => __( 'Cerca Progetto', 'textdomain' ),
        'not_found'             => __( 'Non trovato', 'textdomain' ),
        'not_found_in_trash'    => __( 'Non trovato nel cestino', 'textdomain' ),
        'featured_image'        => __( 'Immagine di copertina', 'textdomain' ),
        'set_featured_image'    => __( 'Imposta immagine di copertina', 'textdomain' ),
        'remove_featured_image' => __( 'Rimuovi immagine di copertina', 'textdomain' ),
        'use_featured_image'    => __( 'Usa come immagine di copertina', 'textdomain' ),
        'insert_into_item'      => __( 'Inserisci nel progetto', 'textdomain' ),
        'uploaded_to_this_item' => __( 'Caricato in questo progetto', 'textdomain' ),
        'items_list'            => __( 'Lista progetti', 'textdomain' ),
        'items_list_navigation' => __( 'Navigazione lista progetti', 'textdomain' ),
        'filter_items_list'     => __( 'Filtra lista progetti', 'textdomain' ),
    );
    $args = array(
        'label'                 => __( 'Progetto', 'textdomain' ),
        'description'           => __( 'Custom post type per progetti', 'textdomain' ),
        'labels'                => $labels,
        // Supporta l'immagine in evidenza; per titolo e contenuto usiamo metabox personalizzati
        'supports'              => array( 'thumbnail' ),
        'hierarchical'          => false,
        'public'                => true,
        'show_ui'               => true,
        'show_in_menu'          => true,
        'menu_position'         => 5,
        'menu_icon'             => 'dashicons-portfolio',
        'show_in_admin_bar'     => true,
        'show_in_nav_menus'     => true,
        'can_export'            => true,
        'has_archive'           => true,        
        'exclude_from_search'   => false,
        'publicly_queryable'    => true,
        'capability_type'       => 'post',
        'show_in_rest'          => true,
    );
    register_post_type( 'progetti', $args );
}
add_action( 'init', 'create_custom_post_type_progetti', 0 );


// Enqueue delle funzioni necessarie per il media uploader in admin
function progetti_enqueue_admin_scripts( $hook ) {
    global $post;
    if ( ( $hook == 'post-new.php' || $hook == 'post.php' ) && isset( $post ) && $post->post_type === 'progetti' ) {
        wp_enqueue_media();
    }
}
add_action( 'admin_enqueue_scripts', 'progetti_enqueue_admin_scripts' );


// Aggiunge le Meta Box per i campi personalizzati
function progetti_add_meta_boxes() {
    add_meta_box( 'progetti_titolo', 'Titolo Progetto', 'progetti_titolo_meta_box_callback', 'progetti', 'normal', 'high' );
    add_meta_box( 'progetti_descrizione', 'Descrizione Progetto', 'progetti_descrizione_meta_box_callback', 'progetti', 'normal', 'high' );
    add_meta_box( 'progetti_gallery', 'Gallery Immagini', 'progetti_gallery_meta_box_callback', 'progetti', 'normal', 'high' );
}
add_action( 'add_meta_boxes', 'progetti_add_meta_boxes' );


// Meta box per il Titolo: include il nonce
function progetti_titolo_meta_box_callback( $post ) {
    wp_nonce_field( 'progetti_save_meta_box_data', 'progetti_meta_box_nonce' );
    $value = get_post_meta( $post->ID, '_progetto_titolo', true );
    wp_editor( $value, 'progetto_titolo', array(
        'textarea_name' => 'progetto_titolo',
        'media_buttons' => false,
        'textarea_rows' => 5,
    ) );
}

// Meta box per la Descrizione: non include il nonce
function progetti_descrizione_meta_box_callback( $post ) {
    $value = get_post_meta( $post->ID, '_progetto_descrizione', true );
    wp_editor( $value, 'progetto_descrizione', array(
        'textarea_name' => 'progetto_descrizione',
        'media_buttons' => false,
        'textarea_rows' => 10,
    ) );
}

// Meta box per la Gallery: non include il nonce
function progetti_gallery_meta_box_callback( $post ) {
    $gallery = get_post_meta( $post->ID, '_progetto_gallery', true );
    ?>
    <div id="progetti-gallery-container">
        <ul id="progetti-gallery-list">
        <?php
        if ( ! empty( $gallery ) ) {
            $gallery_ids = explode( ',', $gallery );
            foreach ( $gallery_ids as $attachment_id ) {
                $img_url = wp_get_attachment_thumb_url( $attachment_id );
                if ( $img_url ) {
                    echo '<li data-attachment_id="' . esc_attr( $attachment_id ) . '">
                            <img src="' . esc_url( $img_url ) . '" style="max-width:100px; margin:0 10px 10px 0;"/>
                            <a href="#" class="progetti-remove-image">Rimuovi</a>
                          </li>';
                }
            }
        }
        ?>
        </ul>
        <input type="hidden" id="progetti_gallery" name="progetti_gallery" value="<?php echo esc_attr( $gallery ); ?>" />
    </div>
    <p>
        <a href="#" id="progetti-add-gallery-images" class="button">Aggiungi Immagini</a>
    </p>
    <script>
    jQuery(document).ready(function($){
        var frame;
        $('#progetti-add-gallery-images').on('click', function(e){
            e.preventDefault();
            if ( frame ) {
                frame.open();
                return;
            }
            frame = wp.media({
                title: 'Seleziona Immagini per la Gallery',
                button: {
                    text: 'Aggiungi alla gallery'
                },
                multiple: true
            });
            frame.on( 'select', function() {
                var attachments = frame.state().get('selection').toArray();
                var gallery_ids = $('#progetti_gallery').val();
                var ids = gallery_ids ? gallery_ids.split(',') : [];
                attachments.forEach(function(attachment){
                    attachment = attachment.toJSON();
                    if ( ids.indexOf(attachment.id.toString()) === -1 ) {
                        ids.push(attachment.id);
                        var img_url = attachment.sizes.thumbnail ? attachment.sizes.thumbnail.url : attachment.url;
                        $('#progetti-gallery-list').append('<li data-attachment_id="'+attachment.id+'"><img src="'+img_url+'" style="max-width:100px; margin:0 10px 10px 0;"/><a href="#" class="progetti-remove-image">Rimuovi</a></li>');
                    }
                });
                $('#progetti_gallery').val(ids.join(','));
            });
            frame.open();
        });
        // Rimuove l'immagine dalla gallery
        $('#progetti-gallery-list').on('click', '.progetti-remove-image', function(e){
            e.preventDefault();
            $(this).closest('li').remove();
            var ids = [];
            $('#progetti-gallery-list li').each(function(){
                ids.push($(this).data('attachment_id'));
            });
            $('#progetti_gallery').val(ids.join(','));
        });
    });
    </script>
    <?php
}

add_filter( 'wp_insert_post_data', 'progetti_set_custom_title', 10, 2 );
function progetti_set_custom_title( $data, $postarr ) {
    if ( isset( $postarr['post_type'] ) && $postarr['post_type'] === 'progetti' ) {
        if ( isset( $_POST['progetto_titolo'] ) ) {
            $custom_title = wp_strip_all_tags( $_POST['progetto_titolo'] );
            $data['post_title'] = $custom_title;
        }
    }
    return $data;
}

function progetti_save_meta_box_data( $post_id ) {
    // Verifica il nonce (presente solo nel meta box del Titolo)
    if ( ! isset( $_POST['progetti_meta_box_nonce'] ) ) {
        return;
    }
    if ( ! wp_verify_nonce( $_POST['progetti_meta_box_nonce'], 'progetti_save_meta_box_data' ) ) {
        return;
    }
    // Evita autosave e revisioni
    if ( defined( 'DOING_AUTOSAVE' ) && DOING_AUTOSAVE ) {
        return;
    }
    if ( isset( $_POST['post_type'] ) && 'progetti' == $_POST['post_type'] ) {
        if ( ! current_user_can( 'edit_post', $post_id ) ) {
            return;
        }
    }
    // Salva il titolo e il contenuto personalizzato
    if ( isset( $_POST['progetto_titolo'] ) ) {
        $custom_title = wp_kses_post( $_POST['progetto_titolo'] );
        update_post_meta( $post_id, '_progetto_titolo', $custom_title );
    }
    if ( isset( $_POST['progetto_descrizione'] ) ) {
        update_post_meta( $post_id, '_progetto_descrizione', wp_kses_post( $_POST['progetto_descrizione'] ) );
    }
    // Salva la gallery, se presente
    if ( isset( $_POST['progetti_gallery'] ) ) {
        update_post_meta( $post_id, '_progetto_gallery', sanitize_text_field( $_POST['progetti_gallery'] ) );
    }
}
add_action( 'save_post', 'progetti_save_meta_box_data' );

I attempted several approaches to fix the issue. First, I updated the post title within the save_post hook using wp_update_post and temporarily removed the hook to avoid recursion. Next, I tried using the wp_insert_post_data filter to update the title before the post was saved. I also consolidated nonce usage across my meta boxes to prevent any conflicts. I expected that the gallery meta field—holding a comma-separated list of image attachment IDs—would be saved correctly and persist after the post update. However, after saving and refreshing the page, the gallery meta field is empty, causing the images to disappear.

Matching multiple occurrences of a regex group with brackets rather than just the outside brackets [duplicate]

I’m making Regex patterns in JavaScript. Given the following string:

Filler [[ 2024-10-31 ]] at 08:53 {{{addProperty=time:(08:53)}}} more filler after {{{addProperty=tags:(one,two,three)}}}

I’d like to match and extract the two occurrences of {{{...}}} including the brackets in my results like so:

[
  " {{{addProperty=time:(08:53)}}}",
  " {{{addProperty=tags:(one,two,three)}}}"
]

My latest attempt, /\s{{{(.*)}}}/g, matches one occurrence fine but stumbles on two. Rather than finding the two occurrences separately and producing two matches, it matches from the first opening brackets to the last closing brackets and returns everything in between as one match. So it finds this on my sample string:

 {{{addProperty=time:(08:53)}}} more filler after {{{addProperty=tags:(one,two,three)}}}

I’m relatively new to regex and don’t quite know how to approach this. I don’t need the JavaScript apparatus around the search, I’m OK with that, just help on the regex pattern itself. Thank you.

Edit: Clarification

  1. The brackets will never be nested
  2. There will be a variety of content inside the brackets, but never more curly brackets. The examples given give a good sense of it.
  3. There could be zero or many occurrences of {{{...}}} in the sample strings.
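For reference, the greedy/lazy distinction is the usual culprit here: `.*` expands as far as it can, so it swallows everything up to the *last* `}}}`. A lazy `.*?` (or a negated class such as `[^{}]*`) stops at the first closing braces instead. A sketch against the sample string:

```javascript
const input = 'Filler [[ 2024-10-31 ]] at 08:53 {{{addProperty=time:(08:53)}}} more filler after {{{addProperty=tags:(one,two,three)}}}';

// Lazy quantifier: each match stops at the first "}}}" it can reach.
const matches = input.match(/\{\{\{.*?\}\}\}/g);
// → ["{{{addProperty=time:(08:53)}}}", "{{{addProperty=tags:(one,two,three)}}}"]
```

If the leading space should be part of each match, as in the expected output shown above, prepending `\s?` to the pattern would include it.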

Server-side recording of Three.JS/WebGL animations as video file (mp4) [closed]

I’m working on a project where a user can upload any input video to my web app. My server-side application will process that input video, add some effects to it and make an output video file available to the user to download.

I’m planning to use Three.js/WebGL for the animation rendering. Is it possible to construct such a pipeline?

I’ve explored node-canvas, headless-gl, ccapture.js and all but most of them seem to either not fully support Three.js/WebGL rendering or not support server-side recording.

Similar older questions do not have an accepted answer but only some (possibly outdated) hints: Best way to record a HTML Canvas/WebGL animation server-side into a video?

Any ideas/hints? Or should I shift the video rendering to an entire different technology (Considering it’s 2D videos, ffmpeg? ffmpeg output doesn’t seem to have good effects)

javascript – how to convert a zoned date string to a UTC date?

My reference date is zoned (Europe/Paris) and I want to add 1 minute to it and compare the result to now.

Sample:

08/01/2025 18:12:06 (Paris) + 1 minute = 08/01/2025 18:13:06 (Paris)

var mydate = "08/01/2025 18:12:06"
var dateString = mydate.split(' ')[0];
var timeString = mydate.split(' ')[1];
var date = new Date(dateString.split('/')[2], dateString.split('/')[1] - 1, dateString.split('/')[0], timeString.split(':')[0], timeString.split(':')[1]);

let endDate = moment(new Date(date.getTime())).add(1, 'minutes');
let now = moment(new Date().getTime());

if(now > endDate) {
  ...
}

My problem is that endDate is initialized via new Date from the Paris-local date string, so it is off by one hour.

Converting a UTC date to a zoned date is easy, but converting a zoned date string to UTC is hard. Please help, JavaScript developers!
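One dependency-free approach (assuming the runtime ships full ICU time-zone data, as modern browsers and Node do) is to let Intl.DateTimeFormat report what wall-clock time a guessed instant shows in Europe/Paris, then correct the guess by the difference. The helper below, `zonedToUtc`, is a hypothetical name, not an existing API:

```javascript
// Hypothetical helper: parse a "dd/mm/yyyy HH:MM:SS" string as wall-clock
// time in the given IANA zone and return the corresponding UTC Date.
function zonedToUtc(str, timeZone) {
  const [d, t] = str.split(' ');
  const [day, month, year] = d.split('/').map(Number);
  const [hh, mm, ss] = t.split(':').map(Number);
  const target = Date.UTC(year, month - 1, day, hh, mm, ss);
  const fmt = new Intl.DateTimeFormat('en-GB', {
    timeZone, hourCycle: 'h23',
    year: 'numeric', month: '2-digit', day: '2-digit',
    hour: '2-digit', minute: '2-digit', second: '2-digit',
  });
  // First guess: pretend the fields were UTC, then shift by however far the
  // zone's wall clock is from that. Two passes converge across DST edges.
  let guess = target;
  for (let i = 0; i < 2; i++) {
    const p = Object.fromEntries(fmt.formatToParts(guess).map(x => [x.type, x.value]));
    const shown = Date.UTC(+p.year, +p.month - 1, +p.day, +p.hour, +p.minute, +p.second);
    guess -= shown - target;
  }
  return new Date(guess);
}

// Once the instant is true UTC, the comparison needs no library at all:
const utc = zonedToUtc('08/01/2025 18:12:06', 'Europe/Paris');
const endDate = new Date(utc.getTime() + 60 * 1000); // + 1 minute
if (Date.now() > endDate.getTime()) {
  // the deadline has passed
}
```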

Capture Ultrasounds with Web Audio API frequency > 22kHz

I have to create JavaScript code that connects to the microphone (even external ones) and renders a spectrogram in the ultrasound range: from 22 kHz to 48 kHz (at least).

Apparently, there is a bottleneck with the Web Audio API (AudioContext), as any frequency above 22 kHz is cut off (see image below).

[image: spectrogram screenshot showing the cut-off above ~22 kHz]

Please notice that:

  1. The browser grants the requested sampling rate of 96 kHz.
  2. The cut-off happens both with an ultrasound microphone (connected via USB) and with the embedded microphone at its proper sound-quality setting (mine goes up to 96 kHz).
  3. I tried to replicate the same code using the AnalyserNode API and the problem is still there.

Any idea why this happens?
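A plausible explanation worth checking (an assumption, not a diagnosis): the AudioContext may actually be running at 44.1 or 48 kHz even though 96 kHz was requested — getUserMedia streams are commonly resampled to the device's hardware rate, and nothing above the context's Nyquist frequency (sampleRate / 2) exists in the FFT at all. A cut-off at ~22 kHz matches the Nyquist of a 44.1 kHz context exactly. Reading `audioContext.sampleRate` at runtime, and working out where each FFT bin lands, would confirm it:

```javascript
// Bin k of an fftSize-point FFT covers frequency k * sampleRate / fftSize.
function binFrequency(k, sampleRate, fftSize) {
  return (k * sampleRate) / fftSize;
}

// With the fftSize of 4096 from the code below, the top of the displayable
// range is the Nyquist bin (fftSize / 2):
const topBinAt96k = binFrequency(2048, 96000, 4096); // 48000 Hz with a true 96 kHz context
const topBinAt48k = binFrequency(2048, 48000, 4096); // 24000 Hz if the context fell back to 48 kHz
```

In the browser, `console.log(audioContext.sampleRate)` after the context is created (rather than the requested value) is the number that actually bounds the spectrum.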

Here is the code…

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0"/>
  <title>Improved Spectrogram with Extended Ultrasound Range</title>
  <style>
    body { font-family: Arial, sans-serif; text-align: center; }
    .controls { margin-bottom: 1em; }
    .container {
      display: flex;
      flex-direction: row;
      justify-content: center;
      align-items: flex-start;
      margin: 0 auto;
      position: relative;
      width: fit-content;
    }
    .axis-container {
      position: relative;
      width: 50px;
      height: 300px;
      margin-right: 10px;
    }
    .tick {
      position: absolute;
      white-space: nowrap;
      transform: translateY(-50%);
      font-size: 12px;
      right: 0;
    }
    canvas { border: 1px solid black; display: block; }
  </style>
</head>
<body>
  <h1>Improved Spectrogram<br>(Microphone Input with Extended Ultrasound Range)</h1>
  
  <div class="controls">
    <button onclick="startTest()">Start Spectrogram</button>
    <label for="scaleMode">Y‑Scale:</label>
    <select id="scaleMode">
      <option value="log" selected>Log</option>
      <option value="linear">Linear</option>
    </select>
  </div>
  
  <p id="info">
    Spectrogram visualizing the microphone audio signal with enhanced dynamic range and color mapping.
  </p>
  
  <div class="container">
    <div class="axis-container" id="axisContainer"></div>
    <canvas id="spectrogram" width="800" height="300"></canvas>
  </div>
  
  <script>
    // --------------------
    // Global Audio & FFT Parameters
    // --------------------
    let sampleRate = 96000;      // Requested sample rate (will be updated from AudioContext)
    const fftSize = 4096;        // FFT size (power of 2)
    const MIN_FREQ = 20;         // Lowest frequency to display (Hz)
    const NUM_TICKS = 8;         // Number of ticks on Y-axis
    let scaleMode = "log";       // "log" or "linear"
    
    let canvas, canvasCtx;
    
    // Microphone buffer & AudioContext variables:
    let micBuffer = new Float32Array(fftSize);
    let micBufferAvailable = false;
    let audioContext, scriptProcessor;
    
    // --------------------
    // FFT Implementation (Cooley-Tukey)
    // --------------------
    function bitReverse(x, bits) {
      let y = 0;
      for (let i = 0; i < bits; i++) {
        y = (y << 1) | (x & 1);
        x >>>= 1;
      }
      return y;
    }
    
    function fft(re, im) {
      const n = re.length;
      const levels = Math.log2(n);
      for (let i = 0; i < n; i++) {
        const j = bitReverse(i, levels);
        if (j > i) {
          [re[i], re[j]] = [re[j], re[i]];
          [im[i], im[j]] = [im[j], im[i]];
        }
      }
      for (let size = 2; size <= n; size *= 2) {
        const halfsize = size / 2;
        const tablestep = n / size;
        for (let i = 0; i < n; i += size) {
          for (let j = 0; j < halfsize; j++) {
            const k = j * tablestep;
            const angle = -2 * Math.PI * k / n;
            const cos = Math.cos(angle);
            const sin = Math.sin(angle);
            const tre = re[i+j+halfsize] * cos - im[i+j+halfsize] * sin;
            const tim = re[i+j+halfsize] * sin + im[i+j+halfsize] * cos;
            re[i+j+halfsize] = re[i+j] - tre;
            im[i+j+halfsize] = im[i+j] - tim;
            re[i+j] += tre;
            im[i+j] += tim;
          }
        }
      }
    }
    
    // --------------------
    // Frequency Mapping & Y-Axis Functions
    // --------------------
    function yToFreq(y, canvasHeight, minF, maxF) {
      const ratio = 1 - (y / canvasHeight);
      if (scaleMode === "linear") {
        return minF + ratio * (maxF - minF);
      } else {
        const minLog = Math.log10(minF);
        const maxLog = Math.log10(maxF);
        const freqLog = minLog + ratio * (maxLog - minLog);
        return Math.pow(10, freqLog);
      }
    }
    
    function freqToIndex(freq, maxFreq, bufferLength) {
      let idx = (freq / maxFreq) * (bufferLength - 1);
      return Math.floor(Math.min(bufferLength - 1, Math.max(0, idx)));
    }
    
    function freqToY(freq, minF, maxF, containerHeight) {
      let ratio;
      if (scaleMode === "linear") {
        ratio = (freq - minF) / (maxF - minF);
      } else {
        const minLog = Math.log10(minF);
        const maxLog = Math.log10(maxF);
        const freqLog = Math.log10(freq);
        ratio = (freqLog - minLog) / (maxLog - minLog);
      }
      ratio = Math.min(1, Math.max(0, ratio));
      return containerHeight * (1 - ratio);
    }
    
    function buildFreqTicks(numTicks, mode, minF, maxF) {
      const ticks = [];
      for (let i = 0; i <= numTicks; i++) {
        const ratio = i / numTicks;
        if (mode === "linear") {
          ticks.push(minF + ratio * (maxF - minF));
        } else {
          ticks.push(minF * Math.pow(maxF / minF, ratio));
        }
      }
      return ticks;
    }
    
    function drawYAxis() {
      const axisContainer = document.getElementById("axisContainer");
      axisContainer.innerHTML = "";
      const maxFreq = sampleRate / 2;
      const containerHeight = axisContainer.offsetHeight;
      const freqTicks = buildFreqTicks(NUM_TICKS, scaleMode, MIN_FREQ, maxFreq);
      freqTicks.forEach(freq => {
        const yPos = freqToY(freq, MIN_FREQ, maxFreq, containerHeight);
        const label = document.createElement("div");
        label.className = "tick";
        label.textContent = `${Math.round(freq)} Hz`;
        label.style.top = `${yPos}px`;
        axisContainer.appendChild(label);
      });
    }
    
    // --------------------
    // HSV to RGB conversion for color mapping.
    // --------------------
    function hsvToRgb(h, s, v) {
      let c = v * s;
      let x = c * (1 - Math.abs((h / 60) % 2 - 1));
      let m = v - c;
      let r, g, b;
      if (h < 60) {
        r = c; g = x; b = 0;
      } else if (h < 120) {
        r = x; g = c; b = 0;
      } else if (h < 180) {
        r = 0; g = c; b = x;
      } else if (h < 240) {
        r = 0; g = x; b = c;
      } else if (h < 300) {
        r = x; g = 0; b = c;
      } else {
        r = c; g = 0; b = x;
      }
      r = Math.floor((r + m) * 255);
      g = Math.floor((g + m) * 255);
      b = Math.floor((b + m) * 255);
      return `rgb(${r}, ${g}, ${b})`;
    }
    
    // Maps intensity (0-255) to a color (from blue to red).
    function intensityToColor(intensity) {
      // Map intensity to a hue between 240 (blue) and 0 (red)
      let hue = 240 * (255 - intensity) / 255;
      return hsvToRgb(hue, 1, 1);
    }
    
    // --------------------
    // Spectrogram Setup
    // --------------------
    function setupSpectrogram() {
      canvas = document.getElementById("spectrogram");
      canvasCtx = canvas.getContext("2d");
      canvasCtx.clearRect(0, 0, canvas.width, canvas.height);
      const scaleSelect = document.getElementById("scaleMode");
      scaleSelect.onchange = () => {
        scaleMode = scaleSelect.value;
        drawYAxis();
      };
      drawYAxis();
    }
    
    // --------------------
    // Start Microphone Capture and Animation
    // --------------------
    async function startMic() {
      try {
        // Request a 96 kHz sample rate (if supported by the browser/hardware)
        audioContext = new (window.AudioContext || window.webkitAudioContext)({sampleRate: 96000});
        console.log("AudioContext sampleRate:", audioContext.sampleRate);
        sampleRate = audioContext.sampleRate;  // update sample rate
        const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: false });
        const source = audioContext.createMediaStreamSource(stream);
        // Create a ScriptProcessorNode with a buffer size equal to fftSize.
        scriptProcessor = audioContext.createScriptProcessor(fftSize, 1, 1);
        scriptProcessor.onaudioprocess = function(audioProcessingEvent) {
          let inputBuffer = audioProcessingEvent.inputBuffer;
          let channelData = inputBuffer.getChannelData(0);
          micBuffer.set(channelData);
          micBufferAvailable = true;
        };
        source.connect(scriptProcessor);
        scriptProcessor.connect(audioContext.destination);
        // Start the animation loop.
        animate();
      } catch (e) {
        alert('Error accessing microphone: ' + e);
      }
    }
    
    function startTest() {
      setupSpectrogram();
      startMic();
    }
    
    // --------------------
    // Spectrogram Animation Using Microphone Data
    // --------------------
    function animate() {
      requestAnimationFrame(animate);
      
      if (!micBufferAvailable) return; // Wait for microphone data
      
      // Copy the microphone buffer for processing.
      let signal = new Float32Array(micBuffer);
      micBufferAvailable = false;
      
      // Apply a Hamming window to reduce spectral leakage.
      for (let i = 0; i < fftSize; i++) {
        let windowVal = 0.54 - 0.46 * Math.cos(2 * Math.PI * i / (fftSize - 1));
        signal[i] *= windowVal;
      }
      
      // Prepare arrays for FFT.
      let re = new Float32Array(signal);
      let im = new Float32Array(fftSize); // automatically zeroed
      
      fft(re, im);
      
      // Compute magnitudes for the first half (Nyquist).
      const halfSize = fftSize / 2;
      let magnitudes = new Float32Array(halfSize);
      for (let i = 0; i < halfSize; i++) {
        magnitudes[i] = Math.sqrt(re[i] * re[i] + im[i] * im[i]);
      }
      
      // Convert magnitudes to decibels.
      let dBArray = new Float32Array(halfSize);
      for (let i = 0; i < halfSize; i++) {
        dBArray[i] = 20 * Math.log10(magnitudes[i] + 1e-10);
      }
      
      // Use a fixed dynamic range (-80 dB to 0 dB) for mapping.
      const minDB = -80, maxDB = 0;
      let dataArray = new Uint8Array(halfSize);
      for (let i = 0; i < halfSize; i++) {
        let intensity = ((dBArray[i] - minDB) / (maxDB - minDB)) * 255;
        intensity = Math.max(0, Math.min(255, intensity));
        dataArray[i] = intensity;
      }
      
      // Shift the canvas left by 1 pixel.
      const width = canvas.width, height = canvas.height;
      const oldImage = canvasCtx.getImageData(1, 0, width - 1, height);
      canvasCtx.putImageData(oldImage, 0, 0);
      
      // Draw a new column on the right using the FFT data.
      const maxFreq = sampleRate / 2;
      for (let y = 0; y < height; y++) {
        const freq = yToFreq(y, height, MIN_FREQ, maxFreq);
        const index = freqToIndex(freq, maxFreq, halfSize);
        const intensity = dataArray[index];
        canvasCtx.fillStyle = intensityToColor(intensity);
        canvasCtx.fillRect(width - 1, y, 1, 1);
      }
    }
  </script>
</body>
</html>

If I generate a digital signal via JavaScript instead of capturing the microphone via the Web Audio API, the code (the FFT) renders the spectrogram properly, so I suspect there is some filter at the AudioContext level.

An alternative I’m exploring is the WebUSB API, to get a direct connection to the mic’s raw data, but commercial microphones are protected by a device class which cannot be accessed via this API.

Is there any alternative solution for displaying ultrasound spectrograms in the browser?
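One thing worth ruling out before blaming a filter: getUserMedia often captures at the OS/hardware default rate (commonly 48 kHz) and the browser silently resamples that stream into the 96 kHz AudioContext, so nothing above the *capture* Nyquist (24 kHz, or ~22 kHz after the anti-alias filter) can ever appear. A minimal sketch of the check, assuming the browser reports `MediaTrackSettings.sampleRate` (not all do):

```javascript
// The spectrogram can never show content above the Nyquist frequency of the
// capture path, even if the AudioContext itself runs at 96 kHz. The effective
// ceiling is the smaller of the two Nyquists:
function effectiveFreqCeiling(contextSampleRate, captureSampleRate) {
  return Math.min(contextSampleRate, captureSampleRate) / 2;
}

// In the browser, the actual capture rate can be inspected on the media track.
// (sampleRate may be absent from getSettings() in some browsers; treat a
// missing value as "unknown" rather than assuming 96 kHz.)
async function logCaptureRate() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const settings = stream.getAudioTracks()[0].getSettings();
  console.log("capture sample rate:", settings.sampleRate ?? "unknown");
}

// A 48 kHz capture inside a 96 kHz context still caps the display at 24 kHz:
console.log(effectiveFreqCeiling(96000, 48000)); // 24000
```

If the reported capture rate is 48 kHz, the cut-off in the image is exactly what resampling would produce, and the fix is to look at audio-input constraints (or OS-level device configuration) rather than the FFT.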

Optional props functions in React hook

I’m trying to create a hook that has optional function props. The idea is that I could pass functions that could be called before and after a request is made.

export function useData({ onRequest = () => null, onSuccess = () => null, onError = () => null }) {
  const [currentData, setCurrentData] = useState()
  const [contextState, dispatch] = useContextState() // functions and data from a context provider

  const performRequest = useCallback((something) => {
    onRequest() // optional pre-request function call
    getData()
      .then((data) => {
        setCurrentData(data)
        onSuccess() // optional post-request function call
      })
      .catch((err) => {
        onError(err) // optional custom error logic
      })
  }, [onRequest, getData, onSuccess, onError])

  useEffect(() => {
    dispatch({ data: currentData })
  }, [currentData, dispatch])

  return { performRequest }
}

If I have an element on the screen that shows the state of the request, for example, I can use the custom functions to update it. In other places, though, I could leave them out when I don’t need them. When I’ve tried this and similar solutions, I get `Cannot read properties of undefined (reading 'onError')` even though it shows as a function in the VS Code debugger. Not sure if there’s just something I’m missing, or if there’s a fundamental problem with what I’m trying to do.
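A likely cause of that error is calling `useData()` with no argument at all: the per-property defaults only apply once the parameter object itself exists, so destructuring `undefined` throws before any default kicks in. The hook signature needs its own `= {}` fallback. A plain-function sketch of the difference (names are illustrative):

```javascript
// Per-property defaults alone are not enough: calling this with no argument
// tries to destructure `undefined` and throws.
function withoutFallback({ onError = () => null }) {
  return onError;
}

// Adding `= {}` defaults the whole parameter object, so the per-property
// defaults can then apply.
function withFallback({ onError = () => null } = {}) {
  return onError;
}

console.log(typeof withFallback()); // "function"
```

Applied to the hook, that means `export function useData({ onRequest = () => null, onSuccess = () => null, onError = () => null } = {}) { … }`, which lets callers omit the options object entirely.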

How to control gap between slices and line path using wheelnavjs

I am trying to alter two things:

  • The gap between all the individual slices. Right now the default is quite big for what I need.
  • The gap between the line path and the slices. I wish for the line path to touch the slices.

I took the example from the wheelnavjs documentation

wheel = new wheelnav('wheelDiv');
wheel.wheelRadius = 130;
wheel.maxPercent = 1.2;
wheel.colors = colorpalette.oceanfive;
wheel.clickModeRotate = false;
wheel.slicePathFunction = slicePath().WheelSlice;
wheel.navAngle = 30;
wheel.createWheel(['basic', 'hover', 'select', null, null, null]);

and slicePath().WheelSlice seems to add the line path I need, as well as some distance between the slices. But how can I control the gap?
I tried wheelRadius, maxPercent, and setting the margin and padding for both the slices and the lines. Nothing has worked so far.

I’ve tried to illustrate the gaps I’m talking about below.

(Image: illustration of the two gaps in question.)

How do I make a PDF turn into a button on a small screen?

I am trying to have a PDF displayed on the screen for viewing, but turned into a button when viewed on mobile.

My code doesn’t turn the PDF into a button, just into a smaller view of the PDF. I would like it to be a button that opens the PDF on a separate page. In the mobile view it’s a tiny box, not a button.

here is my code:

html

<body>
    <h1 class="about-h1">Overview Of Me!</h1>
    <div class="container-about">
      <div class="resume">
        <object class="pdf" data="pictures/Resume2025.pdf"></object>
      </div>
    </div>

    <script
      src="https://cdn.jsdelivr.net/npm/[email protected]/dist/js/bootstrap.bundle.min.js"
      integrity="sha384-YvpcrYf0tY3lHB60NNkmXc5s9fDVZLESaAA55NDzOxhy9GkcIdslK1eN7N6jIeHz"
      crossorigin="anonymous"
    ></script>
    <script src="index.js"></script>

Js

function makeResumeBtn() {
  const button = document.createElement("button");
  button.id = "resumeBtn";
  button.textContent = "Resume";

  const container = document.getElementsByClassName(".container-about");
  container.appendChild(button);
}

let widthMatch = window.matchMedia("(max-width: 767px)");

widthMatch.addEventListener("change", function (makeResumeBtn) {
  if (makeResumeBtn.matches) {
    $(".pdf").toggleId("resumeBtn");
  } else {
  }
});

css

.container-about {
  text-align: center;
}

.resume {
  width: 800px;
  margin: 0 auto;
}

.pdf {
  width: 800px;
  height: 500px;
}

@media screen and (max-width: 767px) {
 .resume,
  .pdf {
    width: auto;
    height: 50px;
  }
}
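One way to get an actual button on small screens is to keep both elements in the markup and toggle which one is visible from the matchMedia listener, instead of trying to mutate the `<object>` itself. A sketch under those assumptions: the PDF path is the one from the question, but the `resumeBtn` anchor and the wiring are illustrative, not the original code. The markup would gain `<a id="resumeBtn" href="pictures/Resume2025.pdf" target="_blank">Resume</a>`, which opens the PDF on a separate page.

```javascript
// Pure helper: which view should be shown for a given viewport state?
function resumeViewFor(isNarrow) {
  return isNarrow ? "button" : "pdf";
}

// Browser-only wiring (guarded so the helper above is testable on its own).
if (typeof window !== "undefined" && typeof document !== "undefined") {
  const mql = window.matchMedia("(max-width: 767px)");

  function applyView(e) {
    const view = resumeViewFor(e.matches);
    // Show exactly one of the two pre-existing elements.
    document.querySelector(".pdf").style.display =
      view === "pdf" ? "block" : "none";
    document.getElementById("resumeBtn").style.display =
      view === "button" ? "inline-block" : "none";
  }

  mql.addEventListener("change", applyView);
  applyView(mql); // apply once on load, not only on resize
}
```

Note that the original listener only fires on *changes*, so a page loaded on mobile never runs it; calling `applyView` once on load covers that case.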

Can you provide a project module import path to be resolved in a consumed npm library?

I’m currently authoring a UI project as well as an npm package that will be used by this project. My use case for integrating the two involves providing a path to a UI component in this project (e.g. src/components/Comp1/Comp1_1.tsx) and having the npm package resolve that path and import that component.

Is that something that’s possible? Or can npm packages not have any context of the modules/files you have authored in your project folder?
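Generally a published package cannot resolve a path like `src/components/...` itself: that path only means something to the consuming project’s bundler at build time. The usual workaround is inversion of control: the app resolves the module (e.g. via a dynamic `import()` written in app code, which the bundler can see) and hands the result, or a loader function, to the library. A minimal sketch of that pattern; all names here are illustrative:

```javascript
// In the library: accept loader functions, never path strings.
function createRegistry() {
  const loaders = new Map();
  return {
    register(name, loader) { loaders.set(name, loader); },
    async resolve(name) {
      const loader = loaders.get(name);
      if (!loader) throw new Error(`unknown component: ${name}`);
      return loader(); // the *app's* import() runs here, not the library's
    },
  };
}

// In the app: the dynamic import lives in app code, so the bundler resolves it.
const registry = createRegistry();
// Stand-in for: registry.register("Comp1_1", () => import("./components/Comp1/Comp1_1"))
registry.register("Comp1_1", async () => ({ name: "Comp1_1" }));

registry.resolve("Comp1_1").then((mod) => console.log(mod.name)); // "Comp1_1"
```

This keeps the package free of any knowledge of the consumer’s file layout while still letting it render components the consumer authored.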

What is the best AI tool for coding? [closed]

I’m looking for the best AI tool to assist with generating Python scripts for data analysis or debugging Java code. I’ve tried ChatGPT and GitHub Copilot, but I found that ChatGPT struggled with complex logic, and Copilot’s suggestions were often irrelevant. My main goal is to improve code efficiency, generate complete functions, and get real-time debugging assistance. Which AI tool would be the best fit for my needs?

How can JavaScript be used to write over buttons in a Python web app? [closed]

I have a Python web app that is a knowledge-database query bot: the user asks a question and the bot answers from the knowledge database. When the answer is given, the user is offered the choice to give a 1-3 star rating. It looks like this:

This is bot answer

|Give rating|* | **| ***| 

Here the | | are separate buttons: the first is not clickable, the other three are, and the result of a click is stored in a database via a custom pipeline. Is it possible to write a JS program so that when the user hovers over the stars, they are shown as three empty (grey) stars, and the mouse position lets them choose one, two, or three stars, which are painted yellow while the unchosen ones stay grey? This should behave exactly the same as clicking one of the three original buttons. So if the user hovers over, say, the second star, the first and second get painted; when he clicks, this is equivalent to clicking **. Ideally, nothing changes in the DB storage pipeline.

I tried to write a program that looks over the HTML and, when it finds a star, redirects to another visualization program, which I have no idea how to write and am hoping has been done before. Thanks!
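The described behaviour can be sketched without touching the storage pipeline at all: paint stars on hover, and on click forward the event to the pre-existing rating button. This is a sketch under assumptions: the `.rating-star` and `.rating-button` selectors are illustrative stand-ins for whatever the bot’s HTML actually uses.

```javascript
// Pure helper: for a hover on star `hoverIndex`, stars 0..hoverIndex are
// painted and the rest stay grey.
function starsToPaint(hoverIndex, total) {
  return Array.from({ length: total }, (_, i) => i <= hoverIndex);
}

// Browser-only wiring (guarded so the helper is testable on its own).
if (typeof document !== "undefined") {
  const stars = Array.from(document.querySelectorAll(".rating-star")); // illustrative selector
  stars.forEach((star, i) => {
    star.addEventListener("mouseenter", () => {
      starsToPaint(i, stars.length).forEach((painted, j) => {
        stars[j].style.color = painted ? "gold" : "grey";
      });
    });
    star.addEventListener("click", () => {
      // Forward to the original button so the DB pipeline is untouched.
      document.querySelectorAll(".rating-button")[i].click(); // illustrative selector
    });
  });
}

console.log(starsToPaint(1, 3)); // [ true, true, false ]
```

Because the click is delegated to the existing buttons with `.click()`, hovering the second star and clicking is exactly equivalent to clicking ** in the current UI.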