Capturing Google Form data as it gets filled

Question: How can I capture Google Form data entered in any section immediately upon clicking the Next button to move on to the next section (before submitting the form)?

Specifically, I wish to capture the email ID filled in before submission, send a one-time password (OTP) to that address, and allow the respondent to enter it in an OTP field in the next section for validation.

I tried to capture the contents of the first section in the Apps Script attached to the form the moment I click ‘Next’ to go to the next section. However, there are only two event types I can set for any trigger – onOpen and onSubmit. There is no onNext or similar event that captures intermediary button clicks. From what I have researched, this is because the Submit button sends the data to the server, whereas the Next button is purely a client-side operation that lets the respondent go back and forth before submission, so no data can be read by the script while the respondent fills out each section. So as things stand, Google Apps Script does not really seem to be a solution as far as I can see. Is that correct? Is there another fix or workaround, with or without Apps Script, to capture data as each section is filled in?
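
As far as I can tell, the only trigger that actually carries respondent data is the installable ‘on form submit’ trigger, which fires only after Submit. A minimal sketch of such a handler, just for context (the item title 'Email' is a placeholder):

// Minimal sketch: handler for an installable "on form submit" trigger.
// It only ever runs after the respondent presses Submit, never on "Next".
function onFormSubmitHandler(e) {
  const responses = e.response.getItemResponses();
  for (const item of responses) {
    if (item.getItem().getTitle() === 'Email') { // placeholder item title
      const email = item.getResponse();
      Logger.log('Captured email after submit: ' + email);
    }
  }
}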

There are a lot of Chrome plug-ins that can do start-to-finish OTP verification, but I really do not want such plug-ins as they cost money. A technical fix would be fine, even a web app. Any resources I could use to get started building such an app would be appreciated.

I get ReferenceError: $ is not defined after hard refresh

When I open my page from the sidebar there is no problem, but after a hard refresh of the page I get:

ReferenceError: $ is not defined DataTableComponent.useEffect

I am loading my scripts with a ScriptsLoader component in layout.tsx like this:

<body className={...}>
        <div >
                 ...
        </div>
        <ScriptsLoader />
      </body>

I used the following debug code in the component used by the page where I get the problem:

useEffect(() => {
    if (typeof window !== "undefined" && window.$) {
      console.log("jQuery is available!");
      window.$("#someElement").fadeIn();
    } else {
      console.warn("jQuery is not available yet.");
    }
  }, []);

I get this in the console after a hard refresh:

jQuery is not available yet.

How can I solve this problem? Should I change my script loading method?
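
One thing I am considering is loading jQuery through next/script with the beforeInteractive strategy instead of my custom ScriptsLoader, so it is guaranteed to be present before hydration. A minimal sketch of what I mean (the CDN URL and file layout are just examples):

// layout.tsx – sketch: load jQuery before Next.js hydrates any page
import { ReactNode } from 'react';
import Script from 'next/script';

export default function RootLayout({ children }: { children: ReactNode }) {
  return (
    <html lang="en">
      <body>
        <div>{children}</div>
      </body>
      {/* beforeInteractive injects the script before client-side hydration */}
      <Script
        src="https://code.jquery.com/jquery-3.7.1.min.js"
        strategy="beforeInteractive"
      />
    </html>
  );
}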

Export function from Svelte component

Component.svelte:

<script>
export function test(){
    return "test";
}
</script>

App.svelte:

<script>
    import { test } from "./Component.svelte" // error;
     
    let testValue = $state(test())
</script>

<h1>{testValue}</h1>

Is there a way to export a function from the component, but without actually mounting the component?

Or is there some way to replicate a Vue.js portal, but without mounting the full component?
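
The closest thing I have found so far is the module-level script block, which (if I understand the docs correctly) lets a component file export plain functions that can be imported without instantiating the component. A sketch in Svelte 5 syntax:

Component.svelte:

<script module>
    export function test() {
        return "test";
    }
</script>

App.svelte:

<script>
    import { test } from "./Component.svelte"; // importing a module-script export

    let testValue = $state(test());
</script>

<h1>{testValue}</h1>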

Changing the form in a modal window when clicking different buttons

Here is the code for the modal window

<div class="modal fade" id="userModal" tabindex="-1" aria-labelledby="userModalLabel" aria-hidden="true" th:fragment="userModal">
    <div class="modal-dialog">
        <div class="modal-content">
            <div class="modal-header">
                <!-- The title changes depending on the operation -->
                <h5 class="modal-title" id="userModalLabel"></h5>
                <button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
            </div>
            <div class="modal-body">
                <form th:object="${user}" method="post">
                    <input type="hidden" th:name="${_csrf.parameterName}" th:value="${_csrf.token}" />

                    ... another code!!!

                
                    <footer class="footer">
                            <button type="button" class="btn btn-secondary btn-sm" data-bs-dismiss="modal">Close</button>
                            <button type="submit" class="btn btn-primary btn-sm">Edit</button>
                            <button type="submit" class="btn btn-danger btn-sm">Delete</button>
                    </footer>
                </form>
            </div>
        </div>
    </div>
</div>

Here is the code of the script that fills the window with data

<script>
    var saveUserUrl = '[[@{/saveUser}]]';
    var deleteUserUrl = '[[@{/deleteUser}]]';

    $('#userModal').on('show.bs.modal', function (event) {
        var button = $(event.relatedTarget);
        var userId = button.data('user-id');
        var operation = button.data('operation');

        var modal = $(this);
        modal.find('.modal-body #userId').val(userId);

        ... another code!!!

        var form = modal.find('.modal-body form');
        if (operation === 'edit') {
            form.attr('action', saveUserUrl);
        } else {
            // Append the userId to the URL for deletion
            form.attr('action', deleteUserUrl);
        }

        ... another code!!!

        var editButton = modal.find('.modal-body .btn-primary');
        var deleteButton = modal.find('.modal-body .btn-danger');

        if (operation === 'edit') {
            editButton.show();
            deleteButton.hide();
        } else {
            editButton.hide();
            deleteButton.show();
        }

        ... another code!!!
    });

</script>

This is how the fragment with the modal window is inserted into the main HTML file:

<div th:replace="~{user_modal :: userModal}"></div>
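
For context, the buttons in the user table that open this modal pass the data the script reads from event.relatedTarget. Simplified, they look roughly like this (the Thymeleaf expressions here are illustrative, not my exact markup):

<!-- Simplified sketch of the table buttons that trigger the modal -->
<button type="button" class="btn btn-primary btn-sm"
        data-bs-toggle="modal" data-bs-target="#userModal"
        data-operation="edit"
        th:attr="data-user-id=${u.id}">Edit</button>
<button type="button" class="btn btn-danger btn-sm"
        data-bs-toggle="modal" data-bs-target="#userModal"
        data-operation="delete"
        th:attr="data-user-id=${u.id}">Delete</button>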

This method works when you click the Edit button in the modal window:

@PostMapping("/saveUser")
public String saveUser(@ModelAttribute("user") User user,
                       @RequestParam(value = "roles", required = false) List<Long> roleIds,
                       @RequestParam(value = "newPassword", required = false) String newPassword) {

    ... another code!!!
}

and this method is not even reached when you click the Delete button in the modal window:

@PostMapping("/deleteUser")
public String deleteUser(User user) {
    //TODO: println
    System.out.println("deleteUser: " + user.getId());
    userService.delete(user.getId());

    return "redirect:/admin-bootstrap";
}

although the HTTP request http://localhost:8088/deleteUser appears in the browser address bar.

All the necessary views are there.

I tried passing userId as part of the HTTP request (http://localhost:8088/deleteUser?userId=100) and changed the controller method like this:

@PostMapping("/deleteUser")
    public String deleteUser(@RequestParam("userId") Long id) {
        System.out.println("deleteUser: " + id);
        userService.delete(id);

        return "redirect:/admin-bootstrap";
    }

This didn't work either; println() was not printed to the console.

I expect the following functionality. There is a page with a table of users, and next to each row of user data there are two buttons, Edit and Delete. When you click Edit, a modal window opens with input fields that can be changed (user name, password, email address), and when you click the Edit button inside the modal window, the user data in the database is updated – this functionality works! When you click Delete, a modal window also opens, but with the input fields disabled, the title changed from “Edit user” to “Delete user”, the Edit button hidden and the Delete button shown; when you click the Delete button, the corresponding controller method should run, but it is not even reached!

Why choose B.Tech in artificial intelligence and digital systems?

In today’s data-driven world, artificial intelligence (AI) and digital systems are at the forefront of technological advancements, revolutionizing industries and reshaping the global economy. The B.Tech in artificial intelligence and digital systems program is designed to empower students with cutting-edge knowledge and practical skills to drive decision-making and automation using AI and data analytics. From healthcare and finance to retail and entertainment, the demand for AI and data science professionals continues to surge as organizations rely on insights derived from data for strategic growth.
The undergraduate course B. Tech in artificial intelligence (AI) and digital systems covers the theoretical foundations and practical applications of AI, data science, and related fields. This course combines core principles of computer science, AI algorithms, and big data analytics, equipping students to design intelligent systems that analyze vast data sets, predict trends, and automate processes. Graduates of artificial intelligence and digital systems program will emerge as skilled professionals, ready to take on roles in AI development, machine learning engineering, data science, and business intelligence.

This B.Tech in AI and digital systems program is ideal for:

  • Students passionate about AI, machine learning, and data science.
  • Individuals interested in creating smart AI solutions using data-driven technologies.
  • Aspiring professionals eager to develop predictive artificial intelligence models, automate tasks, and solve complex problems.
  • Those with a strong interest in artificial intelligence, mathematics, programming, and analytical thinking.

If you’re ready to explore the evolving fields of AI and data science and want to make a real-world impact with data-driven innovations, a B.Tech artificial intelligence and digital systems course will provide the foundation for success.

My pixelate function is not working in JavaScript

Libraries used: p5.min.js

I am creating a function that takes an image as a parameter, splits it into a 5×5 grid, calculates the average pixel intensity in each box, and then sets every pixel in that box to the average intensity.

This basically just pixelates the image in a very specific way, and I have been told to do it this way.

Steps of the Code

1. Face detection on a snapshot of a webcam (works perfectly)

2. Creates an image of the face detection region (works perfectly)

3. Splits the face image into a 5×5 grid (works perfectly) (the grid is spread out just to make each individual block easier to see)

4. Each block is sent individually to a function called "getPixelAvgIntensity", which calculates the average pixel intensity of the image it is given. Here is the function:
//This calculates the average pixel density in an image, this part seems to work as expected
function getPixelAvgIntensity(img, ix, iy){
    img.loadPixels();
    var intensity = [0, 0, 0, 0];
    var pixelCount = 0;
    
    //Ignore this, just shows me if its getting the correct blocks
    image(img, 600 + ix * 3, 600 + iy * 3);
    console.log("x: ", img.width, ", y: ", img.height);
    
    //loop through the image, adding each pixel's intensity to the total. The pixel intensity is stored as an array using [R, G, B, A]
    for (var x = 0; x < img.width; x++){
        for (var y = 0; y < img.height; y++){
            //store how many pixels have been used
            pixelCount++;
            
            //Loop through the rgba array to add each value
            for (var index = 0; index < intensity.length; index++){
                intensity[index] += Math.floor(img.get(x, y)[index]);
            }
        }
    }
    
    //calculate the average using the amount of pixels used
    for (var index = 0; index < intensity.length; index++){
        intensity[index] = Math.floor(intensity[index] / pixelCount);
    }
    
    //debugging
    console.log(intensity);
    
    //Returns the [R, G, B, A] value of the average intensity
    return intensity;
}
5. The code then sets each pixel in each block to the corresponding average intensity, but when I display it, it comes out like this (the background is white; I'm not sure why it turned black, but that's irrelevant):


It only comes up as a thin bar and I’m not sure why.

Here is the main function which takes in an image as a parameter:

function pixelate(imgIn){
    var imgOut = createImage(imgIn.width, imgIn.height);
    
    //load pixels to use
    imgOut.loadPixels();
    imgIn.loadPixels();
    
    //if this isnt here the program breaks
    var avgIntensity = [0, 0, 0, 0];
    
    //just debugging, ignore this
    //var block = 0;
    
    //loop through x coords of pixels incrementing by a 5th of the image size, so i am left with a 5x5 grid
    for (var x = 0; x < Math.floor(imgIn.width) - Math.floor(imgIn.width / 5); x += Math.floor(imgIn.width / 5)){
        //loop on y coords and do the same thing as x
        for (var y = 0; y < Math.floor(imgIn.height) - Math.floor(imgIn.height / 5); y += Math.floor(imgIn.height / 5)){
            
            //call average intensity function, with the starting coords of the block, and the width and height of the area to send. Sends as an image
            avgIntensity = getPixelAvgIntensity(imgIn.get(x, y, imgIn.width / 5, imgIn.height / 5), x, y);
            
            //just debugging
            //console.log(x, "   ", y, "  ", x + imgIn.width / 5, "  ", y + imgIn.height / 5);
            
            //loop through the block and set every pixel of the current block in the  image to send out to the average pixel intensity
            for (var i = 0; i <= Math.floor(imgIn.width / 5); i++){
                for (var j = 0; j <= Math.floor(imgIn.height / 5); j++){
                    
                    //I know this is messy but I have been trying everything
                    imgOut.set(Math.floor(i + x), Math.floor(j + y), avgIntensity);
                    //console.log("i  ", i + x, "  j  ", j + y);
                }
            }
            //block++;
            //console.log("Block ", block, " is complete");
        }
    
    }
    
    imgOut.updatePixels();
    
    //This should return the pixelated image
    return imgOut;
}
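
For reference, here is a minimal sketch of the block-averaging idea written directly against the pixels array. It assumes the image width and height divide evenly by 5 (which my real images may not) and is only meant to illustrate the result I am after, not a fix:

// Sketch only: average each block of a 5×5 grid and write it back,
// assuming imgIn.width and imgIn.height are divisible by 5 and pixelDensity(1).
function pixelateSketch(imgIn) {
    const imgOut = createImage(imgIn.width, imgIn.height);
    const bw = imgIn.width / 5;   // block width in pixels
    const bh = imgIn.height / 5;  // block height in pixels

    imgIn.loadPixels();
    imgOut.loadPixels();

    for (let bx = 0; bx < 5; bx++) {
        for (let by = 0; by < 5; by++) {
            // sum the RGBA values of every pixel in this block
            const sum = [0, 0, 0, 0];
            for (let x = bx * bw; x < (bx + 1) * bw; x++) {
                for (let y = by * bh; y < (by + 1) * bh; y++) {
                    const i = 4 * (y * imgIn.width + x);
                    for (let c = 0; c < 4; c++) sum[c] += imgIn.pixels[i + c];
                }
            }
            const avg = sum.map(v => Math.floor(v / (bw * bh)));

            // write the average back into every pixel of the block
            for (let x = bx * bw; x < (bx + 1) * bw; x++) {
                for (let y = by * bh; y < (by + 1) * bh; y++) {
                    const i = 4 * (y * imgIn.width + x);
                    for (let c = 0; c < 4; c++) imgOut.pixels[i + c] = avg[c];
                }
            }
        }
    }
    imgOut.updatePixels();
    return imgOut;
}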

I have been trying to fix this for hours and cannot see why it would be doing this. If you want to test it, you just need to call the “pixelate” function with an image as the parameter and make sure you have p5.min.js in your libraries; it should not require anything else.

You will need to run it in an IDE, since I don't think you can load an image on here.

PS: I am new to this site and have only posted here twice, so I'm not familiar with all the little things I need to do. I am also very pressed for time, as I am at uni and this is for an assignment.

EDIT: I was asked for the setup and draw functions with example data. The data is the webcam; when a key is pressed, it captures an image from the webcam and performs all of the above on it.

function setup(){
    createCanvas(4000, 4000);
    pixelDensity(1);
    
    //used for detecting the face, uses a different library, but not relevant to problem
    detector = new objectdetect.detector(160, 120, 1.2, classifier);
    
    //Just shows webcam
    webcam = createCapture(VIDEO);
    webcam.hide();
    noStroke();
}

function draw(){
    background(255);
    
    //If the snapshot has been taken
    if (picture){
        detectFace(picture);
        
        noLoop();
    }
    else{image(webcam, 100, -60);}
}

//Take snapshot of webcam on key press
function keyPressed(){
    picture = createImage(160, 120);
    picture.copy(webcam, 0, 0, webcam.width, webcam.height, 0, 0, 160, 120);
    console.log("Captured Screenshot");
}


//Face detection (I'm not sure exactly how all this works, but I have an idea; our professor gave this piece to us)
function detectFace(imgIn){
    var imgOut = createImage(160, 120);
    imgOut.copy(imgIn, 0, 0, imgIn.width, imgIn.height, 0, 0, 160, 120);
    image(imgOut, 0, 480);
    
    var faceImg;

    imgOut.loadPixels();
    
    //Creates an array of coords where faces have been detected
    faces = detector.detect(imgOut.canvas);
    
    strokeWeight(2);
    stroke(0);
    noFill();
    
    //This draws the box around the face
    if (faces){
        for (var i = faces.length - 1; i < faces.length; i++){
            var face = faces[i];
            //Uses the coords
            rect(face[0] + 0, face[1] + 480, face[2] + 0, face[3]) + 480;
            faceImg = imgOut.get(face[0], face[1], face[2], face[3]);
        }
    }
    image(faceImg, 400, 480);
    
    //This is where the problematic code starts
    image(pixelate(faceImg), 600, 480);
}

array?.map (if array is undefined)

const arr = [
  {
    component_name:'apple',
    component_part: [
      {
         part_no: 1,
         part_name: 'xxx'
      },
      {
         part_no: 2,
         part_name: 'yyy'
      }
    ]
  },
  {
    component_name:'grape'
  }
]

for (const row of arr) {
  row.component_part.map((row2) => {console.log(row2)});
}

The second row's component_part is undefined, so an error occurred.

for (const row of arr) {
  if (row.component_part !== undefined) {
    row.component_part.map((row2) => {console.log(row2)});
  }
}

This solves it, but I don't want to use an if.

ex) row.component_part?.map((row2) => {console.log(row2)});

=> error

=> TypeError: undefined is not iterable (cannot read property Symbol(Symbol.iterator))

Thank you for reading, and sorry about my poor English.

await Promise.all(
  for (const row of arr) {
    row.component_part?.map((row2) => {console.log(row2)});
  }
)

Promise.all => error … thanks

Original source:

await Promise.all(
  row.purchase_property_part?.map(async (value): Promise<void> => {
    const part_data: ComponentPartModelImpl = await ComponentPartRepository.findOneComponentPart(user, row.component_no, value.component_part_no);
    const part_response: ComponentResponseDto = ComponentResponseDto.buildFromAttributes(part_data);
    value.component_name = part_response.component_name;
    value.category_no_item = part_response.category_no_item;
    await ConvertName.attachCategoryName<ComponentResponseDto>(user, value, CategoryKindsRole.ITEM, AttachCategoryRole.NAME);
    value.price = part_response.price;
  })
)

If I don't use Promise.all, the additional data is not attached.
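
A minimal sketch of what I am aiming for, using nullish coalescing to fall back to an empty array so there is no if (just my assumption of how it could look):

// Fall back to an empty array when component_part is missing,
// so .map() and Promise.all() always receive an array.
for (const row of arr) {
  (row.component_part ?? []).map((row2) => { console.log(row2); });
}

// Same idea applied to the original async code:
await Promise.all(
  (row.purchase_property_part ?? []).map(async (value) => {
    // ... attach the part data as before ...
  })
);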

How to read an .npz file in JavaScript?

I’m trying to follow a tutorial on YouTube on how to write a basic neural network, but it’s written in Python. I’m trying to write everything using JavaScript and I haven’t found a way to read the npz file into JS.

I have tried npyjs and tfjs-npy-node, but neither works. Using npy.js yields the error:

Only absolute URLs are supported

Using tfjs-npy-node yields any one of these errors, depending on how I pass it the file:

Expected provided filepath () to have file extension .npy

Error: assert failed

Error: Not a numpy file

Are there any functional libraries that work in Node and will read/parse the file without me needing to create an entire project dedicated to it? (looking at you, tensorflow)
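
From what I understand, an .npz file is just a zip archive of .npy files, and each .npy file starts with a small documented header. Here is a minimal sketch of the direction I am considering (assuming the adm-zip package, and only handling v1.0 headers with little-endian float32 data):

// Sketch: read an .npz (a zip of .npy files) in Node and parse float32 arrays.
const AdmZip = require('adm-zip');

function parseNpy(buf) {
  // bytes 0-5: "\x93NUMPY", bytes 6-7: version, bytes 8-9: header length (LE)
  const headerLen = buf.readUInt16LE(8);
  const header = buf.slice(10, 10 + headerLen).toString('ascii');
  // header is a Python dict literal, e.g. "{'descr': '<f4', 'shape': (60000, 784), ...}"
  const shape = header.match(/'shape':\s*\(([^)]*)\)/)[1]
    .split(',').map(s => s.trim()).filter(Boolean).map(Number);
  const raw = buf.subarray(10 + headerLen);
  const data = new Float32Array(raw.length / 4);
  for (let i = 0; i < data.length; i++) data[i] = raw.readFloatLE(i * 4);
  return { shape, data };
}

const zip = new AdmZip('mnist.npz'); // placeholder file name
for (const entry of zip.getEntries()) {
  const { shape } = parseNpy(entry.getData());
  console.log(entry.entryName, shape);
}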

Mpegts js avc1.640028 codec problem with BT.709

I am using the mpegts.js library for live streaming. We aim to play the live stream URLs (.ts format) sent by our users.

However, we are having some problems with the avc1.640028 codec. When I checked in VLC, we noticed that the non-working streams with the avc1.640028 codec use the BT.709 color space.

Also, the values logged by mpegts.js are as follows.

Working:

[TSDemuxer] > Parsed first PMT: {"pid_stream_type":{"256":27,"257":3,"258":6},"common_pids":{"h264":256,"mp3":257},"pes_private_data_pids":{"258":true},"timed_id3_pids":{},"synchronous_klv_pids":{},"asynchronous_klv_pids":{},"scte_35_pids":{},"smpte2038_pids":{},"program_number":1,"version_number":0,"pcr_pid":256}

Not working:

[TSDemuxer] > Parsed first PMT: {"pid_stream_type":{"256":27,"257":3},"common_pids":{"h264":256,"mp3":257},"pes_private_data_pids":{},"timed_id3_pids":{},"synchronous_klv_pids":{},"asynchronous_klv_pids":{},"scte_35_pids":{},"smpte2038_pids":{},"program_number":1,"version_number":0,"pcr_pid":256}

The only difference between the two is the value 258:6

I can run it in Firefox with the same code, but not in Chrome.

I am not familiar with codecs, remuxing, and demuxing, so I don't know where to start.

Expo App crashes in production Build on TestFlight

I have a React Native Expo iOS app that worked just fine until now. When I run the build with expo run:ios it works perfectly in the simulator, but when I build it for production it immediately crashes as soon as I open it.

I get the following error log in my Xcode console:

Invariant Violation: Failed to call into JavaScript module method RCTDeviceEventEmitter.emit(). Module has not been registered as callable. Bridgeless Mode: false. Registered callable JavaScript modules (n = 0): .
          A frequent cause of the error is that the application entry file path is incorrect. This can also happen when the JS bundle is corrupt or there is an early initialization error when loading React Native., js engine: hermes
Unhandled JS Exception: Invariant Violation: Failed to call into JavaScript module method AppRegistry.runApplication(). Module has not been registered as callable. Bridgeless Mode: false. Registered callable JavaScript modules (n = 0): .
          A frequent cause of the error is that the application entry file path is incorrect. This can also happen when the JS bundle is corrupt or there is an early initialization error when loading React Native., js engine: hermes
*** Terminating app due to uncaught exception 'RCTFatalException: Unhandled JS Exception: Invariant Violation: Failed to call into JavaScript module method AppRegistry.runApplication(). Module has not been registered as callable. Bridgeless Mode: false. Registered callable JavaScript modules (n = 0): .
          A frequent cause of the error is that the application entry file path is incorrect. This can also happen when the JS bundle is corrupt or there is an early initialization error when loading React Native., js engine: hermes', reason: 'Unhandled JS Exception: Invariant Violation: Failed to call into JavaScript module method AppRegistry.runApplication(). Module has not been registered as callable. Bridgeless ..., stack:
invariant@1965:25
__callFunction@28223:21
anonymous@28036:30
__guard@28162:14
callFunctionReturnFlushedQueue@28035:20
'
*** First throw call stack:
(0x1833425fc 0x1808bd244 0x1026faeec 0x10239b15c 0x10239b984 0x183344e34 0x183343e7c 0x1833a7a38 0x10272fc90 0x102731da4 0x1027319f8<…>

I have already tried pretty much everything, from updating all of my dependencies to prompting every possible AI tool for solutions, and so on. Nothing has helped so far. My entry file path etc. is also correct, as I have checked it about 10,000 times.

Now here comes the tricky, most confusing part:
I cloned my repo again and went back to the last successful commit. I updated the version numbers to the current ones, built it and deployed it on TestFlight -> the build works and I can use the app. When I copy the exact same directory of the working codebase, paste it into my actual repository on the latest commit and build it again (NOT A SINGLE LINE OF CODE CHANGED), it crashes. So I don't really understand how it can work on the old commit but crash on my new commit.

I already tried removing my whole repository from my computer and cloning it somewhere else, because I thought it might have something to do with caching, but that doesn't work either.

I tried all the solutions found on GitHub issues or StackOverflow, but none of them worked.

This, This, This and so on.

Further Information:

I also thought it has something to do with react-native-reanimated or react-native-gesture-handler versioning, but 1. they didn't change since my last successful build, and 2. the installed versions are compatible with my react-native version (0.74.5) and Expo ([email protected]).

I ran expo-doctor a million times.

I tried to build it without my custom entry point (index.js), using the default one provided by Expo (a minimal sketch of such an entry point is shown below).

I use @react-native-firebase and updated those dependencies to the most compatible versions.

The xcode logs before the crash don’t give any valuable information or errors at all.

Obviously I cleaned the Xcode builds and cached data with Xcode, and ran expo prebuild --clean & pod install, etc.
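
For reference, here is that minimal sketch of a custom Expo entry point using the standard registerRootComponent pattern (the App import path is a placeholder, not my exact file):

// index.js – standard Expo custom entry point (simplified sketch)
import { registerRootComponent } from 'expo';
import App from './App'; // placeholder path

// registerRootComponent calls AppRegistry.registerComponent under the hood
registerRootComponent(App);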

Call Method Using DOM From Custom Object

I’m trying to advance my understanding of JavaScript. I am using a plugin that includes a custom object that exposes a method that I can use to complete a task.

I noticed that it is using “dom” with that object as follows

ThePlugInObject.dom.TheMethod()

However, I thought methods were called on class objects directly, without the use of “dom”.

ThePlugInObject.TheMethod()

Can someone explain the usage of “dom” in the first example? I’ve searched and cannot find any reference to learn from.
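
To make my mental model concrete, I assume dom is simply a property of the plug-in object that holds another object (a kind of namespace) whose methods you then call. A tiny sketch of what I imagine (all names are made up):

// Hypothetical structure: "dom" is just a nested object acting as a namespace.
const ThePlugInObject = {
  version: '1.0',
  dom: {
    TheMethod() {
      return 'called via ThePlugInObject.dom';
    },
  },
};

console.log(ThePlugInObject.dom.TheMethod()); // "called via ThePlugInObject.dom"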

When using ‘interrupt’ followed by ‘new Command({ resume: …})’, I get an undefined message error from LangChain + LangGraph

When I invoke a graph that includes interrupts in one of its nodes, it seems to get into an invalid/unrecoverable state, with the following error:

        return chatGeneration.message;
                              ^

TypeError: Cannot read properties of undefined (reading 'message')
    at ChatOpenAI.invoke (file:///Users/user/code/lgdemo//node_modules/@langchain/core/dist/language_models/chat_models.js:64:31)

The first encounter of the interrupt appears to go well, but after the second encounter this occurs.

(I include the minimal code to reproduce this in full below the question text.)

The main logic is in approveNode, which contains the interrupt.

  • If the user responds with y, it proceeds to the toolsNode which the agentNode requested.
  • If the user responds with anything else (e.g. n), it proceeds to END

The issue is that once it proceeds to END, the subsequent call to graph.invoke results in this error.

Another thing that I have tried is to change the logic in approveNode such that:

  • If the user responds with y, it proceeds to the toolsNode which the agentNode requested. (same as before)
  • If the user responds with anything else (e.g. n), it proceeds back to agentNode. (this has changed)

(and I change the main graph accordingly to reflect this updated flow)

However, this results in the same error as above, just that it happens after the first interrupt instead of the second interrupt.

Questions:

  • Is the workflow that I have defined valid? Is there a better way to structure it?
  • Otherwise, how can I implement this so that I get a simple approve/deny flow for tool calls?
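
One idea I have considered (but have not verified) is to answer the pending tool call with a ToolMessage when the reviewer rejects, so the assistant message that requested the tool call is not left dangling in the history before the run ends. A rough sketch of approveNode modified that way:

// Rough, unverified idea: record the rejection as a tool response before END.
import { ToolMessage } from '@langchain/core/messages';

async function approveNode(state) {
  const lastMsg = state.messages.at(-1);
  const toolCall = lastMsg.tool_calls.at(-1);

  const interruptResponse = interrupt(
    `Please review the tool invocation: ${toolCall.name}. Do you approve (y/N)`);
  const isApproved = (interruptResponse.trim().charAt(0).toLowerCase() === 'y');

  if (isApproved) {
    return new Command({ goto: 'tools' });
  }
  // Mark the tool call as rejected so the message history stays consistent.
  const rejection = new ToolMessage({
    content: 'Tool call was rejected by the human reviewer.',
    tool_call_id: toolCall.id,
  });
  return new Command({ goto: END, update: { messages: [rejection] } });
}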

Main graph:

const workflow = new StateGraph(MessagesAnnotation)
  .addNode('agent', agentNode)
  .addNode('tools', toolsNode)
  .addNode('approve', approveNode, {
    ends: ['tools', END],
  })
  .addEdge(START, 'agent')
  .addEdge('tools', 'agent')
  .addConditionalEdges('agent', agentRouter, ['approve', END]);
const checkpointer = new MemorySaver();
const graph = workflow.compile({
  checkpointer,
});
const graphConfig = {
  configurable: { thread_id: '0x0004' },
};

Tools, nodes, and routers:

const cmdFooTool = tool(async function(inputs) {
  console.log('===TOOL CMD_FOO===');
  return inputs.name;
}, {
  name: 'CMD_FOO',
  description: 'Invoke when you want to do a Foo.',
  schema: z.object({
    name: z.string('Any string'),
  }),
});
const cmdBarTool = tool(async function(inputs) {
  console.log('===TOOL QRY_BAR===');
  return inputs.name;
}, {
  name: 'QRY_BAR',
  description: 'Invoke when you want to query a Bar.',
  schema: z.object({
    name: z.string('Any string'),
  }),
});
const tools = [cmdFooTool, cmdBarTool];
const llmWithTools = llm.bindTools(tools);

const toolsNode = new ToolNode(tools);

async function agentNode(state) {
  console.log('===AGENT NODE===');
  const response = await llmWithTools.invoke(state.messages);
  console.log('=RESPONSE=',
    '\ncontent:', response.content,
    '\ntool_calls:', response.tool_calls.map((toolCall) => (toolCall.name)));
  return { messages: [response] };
}

async function approveNode (state) {
  console.log('===APPROVE NODE===');
  const lastMsg = state.messages.at(-1);
  const toolCall = lastMsg.tool_calls.at(-1);

  const interruptMessage = `Please review the following tool invocation:
${toolCall.name} with inputs ${JSON.stringify(toolCall.args, undefined, 2)}
Do you approve (y/N)`;

  console.log('=INTERRUPT PRE=');
  const interruptResponse = interrupt(interruptMessage);
  console.log('=INTERRUPT POST=');

  const isApproved = (interruptResponse.trim().charAt(0).toLowerCase() === 'y');
  const goto = (isApproved) ? 'tools' : END;
  console.log('=RESULT=\n', { isApproved, goto });
  return new Command({ goto });
}

function hasToolCalls(message) {
  return message?.tool_calls?.length > 0;
}

async function agentRouter (state) {
  const lastMsg = state.messages.at(-1);
  if (hasToolCalls(lastMsg)) {
    return 'approve';
  }
  return END;
}

Simulate a run:

let state;
let agentResult;
let inputText;
let invokeWith;

// step 1: prompt
inputText = 'Pls perform a Foo with name "ASDF".';
console.log('===HUMAN PROMPT===\n', inputText);
invokeWith = { messages: [new HumanMessage(inputText)] };
agentResult = await graph.invoke(invokeWith, graphConfig);

state = await graph.getState(graphConfig);
console.log('===STATE NEXT===\n', state.next);
console.log('=LAST MSG=\n', agentResult.messages.at(-1).content);
console.log('=LAST TOOL CALLS=\n', agentResult.messages.at(-1).tool_calls);

// step 2: interrupted in the 'approve' node, human in the loop authorises
inputText = 'yes'
console.log('===HUMAN INTERRUPT RESPONSE===\n', inputText);
invokeWith = new Command({ resume: inputText });
agentResult = await graph.invoke(invokeWith, graphConfig);

state = await graph.getState(graphConfig);
console.log('===STATE NEXT===\n', state.next);
console.log('=LAST MSG=\n', agentResult.messages.at(-1).content);
console.log('=LAST TOOL CALLS=\n', agentResult.messages.at(-1).tool_calls);

// step 3: prompt
inputText = 'Pls perform a Foo with name "ZXCV".';
console.log('===HUMAN PROMPT===\n', inputText);
invokeWith = { messages: [new HumanMessage(inputText)] };
agentResult = await graph.invoke(invokeWith, graphConfig);

state = await graph.getState(graphConfig);
console.log('===STATE NEXT===\n', state.next);
console.log('=LAST MSG=\n', agentResult.messages.at(-1).content);
console.log('=LAST TOOL CALLS=\n', agentResult.messages.at(-1).tool_calls);

// step 4: interrupted in the 'approve' node, human in the loop does not authorise
inputText = 'no';
console.log('===HUMAN INTERRUPT RESPONSE===\n', inputText);
invokeWith = new Command({ resume: inputText });
agentResult = await graph.invoke(invokeWith, graphConfig);

state = await graph.getState(graphConfig);
console.log('===STATE NEXT===\n', state.next);
console.log('=LAST MSG=\n', agentResult.messages.at(-1).content);
console.log('=LAST TOOL CALLS=\n', agentResult.messages.at(-1).tool_calls);

// step 5: prompt
inputText = 'Pls perform a Foo with name "GHJK".';
console.log('===HUMAN PROMPT===\n', inputText);
invokeWith = { messages: [new HumanMessage(inputText)] };
agentResult = await graph.invoke(invokeWith, graphConfig);

state = await graph.getState(graphConfig);
console.log('===STATE NEXT===\n', state.next);
console.log('=LAST MSG=\n', agentResult.messages.at(-1).content);
console.log('=LAST TOOL CALLS=\n', agentResult.messages.at(-1).tool_calls);

Full output:

===HUMAN PROMPT===
 Pls perform a Foo with name "ASDF".
===AGENT NODE===
(node:58990) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
=RESPONSE= 
content:  
tool_calls: [ 'CMD_FOO' ]
===APPROVE NODE===
=INTERRUPT PRE=
===STATE NEXT===
 [ 'approve' ]
=LAST MSG=
 
=LAST TOOL CALLS=
 [
  {
    name: 'CMD_FOO',
    args: { name: 'ASDF' },
    type: 'tool_call',
    id: 'call_u7CIyWdTesFATZ5bGG2uaVUZ'
  }
]
===HUMAN INTERRUPT RESPONSE===
 yes
===APPROVE NODE===
=INTERRUPT PRE=
=INTERRUPT POST=
=RESULT=
 { isApproved: true, goto: 'tools' }
===TOOL CMD_FOO===
===AGENT NODE===
=RESPONSE= 
content: The Foo operation has been performed with the name "ASDF". 
tool_calls: []
===STATE NEXT===
 []
=LAST MSG=
 The Foo operation has been performed with the name "ASDF".
=LAST TOOL CALLS=
 []
===HUMAN PROMPT===
 Pls perform a Foo with name "ZXCV".
===AGENT NODE===
=RESPONSE= 
content:  
tool_calls: [ 'CMD_FOO' ]
===APPROVE NODE===
=INTERRUPT PRE=
===STATE NEXT===
 [ 'approve' ]
=LAST MSG=
 
=LAST TOOL CALLS=
 [
  {
    name: 'CMD_FOO',
    args: { name: 'ZXCV' },
    type: 'tool_call',
    id: 'call_kKF91c8G6enWwlrLFON8TYLJ'
  }
]
===HUMAN INTERRUPT RESPONSE===
 no
===APPROVE NODE===
=INTERRUPT PRE=
=INTERRUPT POST=
=RESULT=
 { isApproved: false, goto: '__end__' }
===STATE NEXT===
 []
=LAST MSG=
 
=LAST TOOL CALLS=
 [
  {
    name: 'CMD_FOO',
    args: { name: 'ZXCV' },
    type: 'tool_call',
    id: 'call_kKF91c8G6enWwlrLFON8TYLJ'
  }
]
===HUMAN PROMPT===
 Pls perform a Foo with name "GHJK".
===AGENT NODE===
file:///Users/user/code/lgdemo/node_modules/@langchain/core/dist/language_models/chat_models.js:64
        return chatGeneration.message;
                              ^

TypeError: Cannot read properties of undefined (reading 'message')
    at ChatOpenAI.invoke (file:///Users/user/code/lgdemo//node_modules/@langchain/core/dist/language_models/chat_models.js:64:31)
    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
    at async RunnableCallable.agentNode [as func] (file:///Users/user/code/lgdemo//test.js:51:20)
    at async RunnableCallable.invoke (file:///Users/user/code/lgdemo//node_modules/@langchain/langgraph/dist/utils.js:79:27)
    at async RunnableSequence.invoke (file:///Users/user/code/lgdemo//node_modules/@langchain/core/dist/runnables/base.js:1274:33)
    at async _runWithRetry (file:///Users/user/code/lgdemo//node_modules/@langchain/langgraph/dist/pregel/retry.js:67:22)
    at async PregelRunner._executeTasksWithRetry (file:///Users/user/code/lgdemo//node_modules/@langchain/langgraph/dist/pregel/runner.js:217:33)
    at async PregelRunner.tick (file:///Users/user/code/lgdemo//node_modules/@langchain/langgraph/dist/pregel/runner.js:45:40)
    at async CompiledStateGraph._runLoop (file:///Users/user/code/lgdemo//node_modules/@langchain/langgraph/dist/pregel/index.js:1296:17)
    at async createAndRunLoop (file:///Users/user/code/lgdemo//node_modules/@langchain/langgraph/dist/pregel/index.js:1195:17) {
  pregelTaskId: '7bd60c12-4beb-54b7-85a7-9bc1461600f5'
}

Node.js v23.3.0

YOLOv8 Hand Detection Fails at Close Range After TensorFlow.js Conversion

I’m using YOLOv8 for real-time hand detection in a web app. The model works well in Python, but after converting it to TensorFlow.js, detection struggles when the hand is too close to the webcam—sometimes missing it entirely or misplacing the bounding box.

I preprocess frames by resizing them to 640×640 and apply a Kalman filter for smoothing. The issue seems related to scale variation, but it only appears after the TensorFlow.js conversion.

Here is how I did my conversion:

import os
from ultralytics import YOLO
import shutil
import tensorflow as tf
from google.colab import files as colab_files

def find_saved_model(base_path):
    """Find the SavedModel directory in the export path"""
    for root, dirs, filenames in os.walk(base_path):
        if 'saved_model.pb' in filenames:
            return root
    return None

def add_signatures(saved_model_dir):
    """Load the SavedModel and add required signatures"""
    print("Adding signatures to SavedModel...")

    # Load the model
    model = tf.saved_model.load(saved_model_dir)

    # Create a wrapper function that matches the model's interface
    @tf.function(input_signature=[
        tf.TensorSpec(shape=[1, 640, 640, 3], dtype=tf.float32, name='images')
    ])
    def serving_fn(images):
        # Pass False for training parameter
        return model(images, False, None)

    # Convert the model
    concrete_func = serving_fn.get_concrete_function()

    # Create a new SavedModel with the signature
    tf.saved_model.save(
        model,
        saved_model_dir,
        signatures={
            'serving_default': concrete_func
        }
    )

    print("Signatures added successfully")
    return saved_model_dir

def convert_to_tfjs(pt_model_path, output_dir):
    """
    Convert a PyTorch YOLO model to TensorFlow.js format

    Args:
        pt_model_path (str): Path to the .pt file
        output_dir (str): Directory to save the converted model
    """
    try:
        # Ensure output directory exists
        os.makedirs(output_dir, exist_ok=True)

        # Load the model
        print(f"Loading YOLO model from {pt_model_path}...")
        model = YOLO(pt_model_path)

        # First export to TensorFlow format
        print("Exporting to TensorFlow format...")


        success = model.export(
            format='saved_model',
            imgsz=640,
            half=False,
            simplify=True
        )

        # Find the SavedModel directory
        saved_model_dir = find_saved_model(os.path.join(os.getcwd(), "best_saved_model"))
        if not saved_model_dir:
            raise Exception(f"Cannot find SavedModel directory in {os.path.dirname(pt_model_path)}")

        print(f"Found SavedModel at: {saved_model_dir}")

        # Add signatures to the model
        saved_model_dir = add_signatures(saved_model_dir)

        # Convert to TensorFlow.js
        print("Converting to TensorFlow.js format...")
        tfjs_target_dir = os.path.join(output_dir, 'tfjs_model')

        # Ensure clean target directory
        if os.path.exists(tfjs_target_dir):
            shutil.rmtree(tfjs_target_dir)
        os.makedirs(tfjs_target_dir)

        # Try conversion with modified parameters
        conversion_command = (
            f"tensorflowjs_converter "
            f"--input_format=tf_saved_model "
            f"--output_format=tfjs_graph_model "
            f"--saved_model_tags=serve "
            f"--control_flow_v2=True "
            f"'{saved_model_dir}' "
            f"'{tfjs_target_dir}'"
        )

        print(f"Running conversion command: {conversion_command}")
        result = os.system(conversion_command)

        if result != 0:
            raise Exception("TensorFlow.js conversion failed")

        # Verify conversion
        if not os.path.exists(os.path.join(tfjs_target_dir, 'model.json')):
            raise Exception("TensorFlow.js conversion failed - model.json not found")

        print(f"Successfully converted model to TensorFlow.js format")
        print(f"Output saved to: {tfjs_target_dir}")

        # Print model files
        print("nConverted model files:")
        for filename in os.listdir(tfjs_target_dir):  # Renamed 'file' to 'filename'
            print(f"- {filename}")

        # Create a zip file of the converted model
        zip_path = f"{tfjs_target_dir}.zip"
        shutil.make_archive(tfjs_target_dir, 'zip', tfjs_target_dir)

        # Download the zip file using the renamed colab_files module
        colab_files.download(zip_path)

    except Exception as e:
        print(f"Error during conversion: {str(e)}")
        print("nDebug information:")
        print(f"Current working directory: {os.getcwd()}")
        print(f"PT model exists: {os.path.exists(pt_model_path)}")
        if 'saved_model_dir' in locals():
            print(f"SavedModel directory exists: {os.path.exists(saved_model_dir)}")
            if os.path.exists(saved_model_dir):
                print("SavedModel contents:")
                for root, dirs, filenames in os.walk(saved_model_dir):  # Renamed 'files' to 'filenames'
                    print(f"nDirectory: {root}")
                    for filename in filenames:  # Renamed 'f' to 'filename'
                        print(f"  - {filename}")
        raise

# Usage
from google.colab import files as colab_files  # Use consistent naming
uploaded = colab_files.upload()
pt_model_path = next(iter(uploaded.keys()))
output_dir = "converted_model"
convert_to_tfjs(pt_model_path, output_dir)

My hand pose detection web app

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Real-time Hand Pose Detection</title>
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
    <style>
        body { 
            text-align: center; 
            font-family: Arial, sans-serif;
            margin: 0;
            padding: 20px;
            background: #f0f0f0;
        }
        .container {
            position: relative;
            width: 640px;
            height: 480px;
            margin: 20px auto;
        }
        video, canvas { 
            position: absolute;
            left: 0;
            top: 0;
        }
        button {
            margin: 10px;
            padding: 10px 20px;
            font-size: 16px;
            cursor: pointer;
            background: #007bff;
            color: white;
            border: none;
            border-radius: 4px;
        }
        button:hover {
            background: #0056b3;
        }
        #status {
            padding: 10px;
            background: #fff;
            border-radius: 4px;
            display: inline-block;
        }
    </style>
</head>
<body>
    <h1>Real-time Hand Pose Detection (YOLOv8)</h1>
    <button onclick="loadModel()">Load Model</button>
    <button onclick="startWebcam()">Start Webcam</button>
    <p id="status">Model not loaded</p>

    <div class="container">
        <video id="video" width="640" height="480" autoplay></video>
        <canvas id="canvas" width="640" height="480"></canvas>
    </div>

    <script type="module">
        // Kalman Filter Implementation
        class KalmanFilter {
            constructor(stateSize, measurementSize, processNoise = 0.001, measurementNoise = 0.1) {
                this.state = new Array(stateSize).fill(0);         // State vector [x, y, vx, vy]
                this.covariance = new Array(stateSize * stateSize).fill(0);
                this.processNoise = processNoise;
                this.measurementNoise = measurementNoise;
                this.stateSize = stateSize;
                this.measurementSize = measurementSize;

                // Initialize covariance matrix with high uncertainty
                for (let i = 0; i < stateSize; i++) {
                    this.covariance[i * stateSize + i] = 1000;
                }
            }

            predict(dt = 1/30) {
                // State transition matrix
                const F = new Array(this.stateSize * this.stateSize).fill(0);
                for (let i = 0; i < this.stateSize/2; i++) {
                    F[i * this.stateSize + i] = 1;
                    F[i * this.stateSize + (i + this.stateSize/2)] = dt;
                    F[(i + this.stateSize/2) * this.stateSize + (i + this.stateSize/2)] = 1;
                }

                // Predict state
                const newState = new Array(this.stateSize).fill(0);
                for (let i = 0; i < this.stateSize; i++) {
                    for (let j = 0; j < this.stateSize; j++) {
                        newState[i] += F[i * this.stateSize + j] * this.state[j];
                    }
                }
                this.state = newState;

                // Predict covariance
                const newCovariance = new Array(this.stateSize * this.stateSize).fill(0);
                for (let i = 0; i < this.stateSize; i++) {
                    for (let j = 0; j < this.stateSize; j++) {
                        for (let k = 0; k < this.stateSize; k++) {
                            newCovariance[i * this.stateSize + j] += 
                                F[i * this.stateSize + k] * this.covariance[k * this.stateSize + j];
                        }
                    }
                }

                // Add process noise
                for (let i = 0; i < this.stateSize; i++) {
                    newCovariance[i * this.stateSize + i] += this.processNoise;
                }

                this.covariance = newCovariance;
            }

            update(measurement) {
                // Measurement matrix
                const H = new Array(this.measurementSize * this.stateSize).fill(0);
                for (let i = 0; i < this.measurementSize; i++) {
                    H[i * this.stateSize + i] = 1;
                }

                // Calculate Kalman gain
                const S = new Array(this.measurementSize * this.measurementSize).fill(0);
                for (let i = 0; i < this.measurementSize; i++) {
                    for (let j = 0; j < this.measurementSize; j++) {
                        for (let k = 0; k < this.stateSize; k++) {
                            S[i * this.measurementSize + j] += 
                                H[i * this.stateSize + k] * this.covariance[k * this.stateSize + j];
                        }
                    }
                    S[i * this.measurementSize + i] += this.measurementNoise;
                }

                const K = new Array(this.stateSize * this.measurementSize).fill(0);
                for (let i = 0; i < this.stateSize; i++) {
                    for (let j = 0; j < this.measurementSize; j++) {
                        for (let k = 0; k < this.stateSize; k++) {
                            K[i * this.measurementSize + j] += 
                                this.covariance[i * this.stateSize + k] * H[j * this.stateSize + k];
                        }
                        K[i * this.measurementSize + j] /= S[j * this.measurementSize + j];
                    }
                }

                // Update state
                const innovation = new Array(this.measurementSize).fill(0);
                for (let i = 0; i < this.measurementSize; i++) {
                    innovation[i] = measurement[i];
                    for (let j = 0; j < this.stateSize; j++) {
                        innovation[i] -= H[i * this.stateSize + j] * this.state[j];
                    }
                }

                for (let i = 0; i < this.stateSize; i++) {
                    for (let j = 0; j < this.measurementSize; j++) {
                        this.state[i] += K[i * this.measurementSize + j] * innovation[j];
                    }
                }

                // Update covariance
                const newCovariance = new Array(this.stateSize * this.stateSize).fill(0);
                for (let i = 0; i < this.stateSize; i++) {
                    for (let j = 0; j < this.stateSize; j++) {
                        newCovariance[i * this.stateSize + j] = this.covariance[i * this.stateSize + j];
                        for (let k = 0; k < this.measurementSize; k++) {
                            newCovariance[i * this.stateSize + j] -= 
                                K[i * this.measurementSize + k] * H[k * this.stateSize + j] * this.covariance[i * this.stateSize + j];
                        }
                    }
                }
                this.covariance = newCovariance;
            }

            getState() {
                return this.state.slice(0, this.measurementSize);
            }
        }

        let model;
        let video = document.getElementById("video");
        let canvas = document.getElementById("canvas");
        let ctx = canvas.getContext("2d");

        const CONF_THRESHOLD = 0.75;
        const IOU_THRESHOLD = 0.1;
        let isProcessing = false;
        let previousDetections = [];

        // Initialize Kalman filters
        let bboxFilter = new KalmanFilter(8, 4, 0.005, 0.2); // State: [x, y, w, h, vx, vy, vw, vh]
        let keypointFilter = new KalmanFilter(4, 2, 0.005, 0.2); // State: [x, y, vx, vy]
        let lastFrameTime = performance.now();

        // Model input size constants
        const MODEL_WIDTH = 640;
        const MODEL_HEIGHT = 640;
        const SCALE_FACTOR = 1.8;

        async function loadModel() {
            try {
                document.getElementById("status").innerText = "Loading model...";
                model = await tf.loadGraphModel('http://localhost:8000/model.json');
                document.getElementById("status").innerText = "Model loaded!";
                console.log("Model loaded successfully");
            } catch (error) {
                console.error("Error loading model:", error);
                document.getElementById("status").innerText = "Error loading model!";
            }
        }

        async function startWebcam() {
            if (!model) {
                alert("Please load the model first!");
                return;
            }

            try {
                const stream = await navigator.mediaDevices.getUserMedia({ 
                    video: { 
                        width: { ideal: 640 },
                        height: { ideal: 480 },
                        facingMode: 'user'
                    } 
                });
                video.srcObject = stream;
                video.onloadedmetadata = () => {
                    video.play();
                    processVideoFrame();
                };
            } catch (err) {
                console.error("Error accessing webcam:", err);
                document.getElementById("status").innerText = "Error accessing webcam!";
            }
        }

        async function processVideoFrame() {
            if (!model || !video.videoWidth || isProcessing) return;
            
            try {
                isProcessing = true;
                
                const offscreenCanvas = document.createElement('canvas');
                offscreenCanvas.width = MODEL_WIDTH;
                offscreenCanvas.height = MODEL_HEIGHT;
                const offscreenCtx = offscreenCanvas.getContext('2d');
                
                const scale = Math.min(MODEL_WIDTH / video.videoWidth, MODEL_HEIGHT / video.videoHeight);
                const scaledWidth = video.videoWidth * scale;
                const scaledHeight = video.videoHeight * scale;
                const offsetX = (MODEL_WIDTH - scaledWidth) / 2;
                const offsetY = (MODEL_HEIGHT - scaledHeight) / 2;
                
                offscreenCtx.fillStyle = 'black';
                offscreenCtx.fillRect(0, 0, MODEL_WIDTH, MODEL_HEIGHT);
                offscreenCtx.drawImage(video, offsetX, offsetY, scaledWidth, scaledHeight);
                
                const imgTensor = tf.tidy(() => {
                    return tf.browser.fromPixels(offscreenCanvas)
                        .expandDims(0)
                        .toFloat()
                        .div(255.0);
                });
        
                const predictions = await model.predict(imgTensor);
                imgTensor.dispose();
                
                const processedDetections = await processDetections(predictions, {
                    offsetX,
                    offsetY,
                    scale,
                    originalWidth: video.videoWidth,
                    originalHeight: video.videoHeight
                });
                
                const smoothedDetections = smoothDetections(processedDetections);
                drawDetections(smoothedDetections);
                
                previousDetections = smoothedDetections;
                
                if (Array.isArray(predictions)) {
                    predictions.forEach(p => p.dispose());
                } else {
                    predictions.dispose();
                }
                
            } catch (error) {
                console.error("Error in processing frame:", error);
            } finally {
                isProcessing = false;
                requestAnimationFrame(processVideoFrame);
            }
        }

        async function processDetections(predictionTensor, transformInfo) {
            const predictions = await predictionTensor.array();
            
            if (!predictions.length || !predictions[0].length) {
                return [];
            }
            
            let detections = [];
            const numDetections = predictions[0][0].length;
            
            for (let i = 0; i < numDetections; i++) {
                const confidence = predictions[0][4][i];
                
                if (confidence > CONF_THRESHOLD) {
                    let x = (predictions[0][0][i] - transformInfo.offsetX) / transformInfo.scale;
                    let y = (predictions[0][1][i] - transformInfo.offsetY) / transformInfo.scale;
                    let width = (predictions[0][2][i] / transformInfo.scale) * SCALE_FACTOR;
                    let height = (predictions[0][3][i] / transformInfo.scale) * SCALE_FACTOR;
                    
                    let kp_x = (predictions[0][5][i] - transformInfo.offsetX) / transformInfo.scale;
                    let kp_y = (predictions[0][6][i] - transformInfo.offsetY) / transformInfo.scale;
                    
                    x = x / transformInfo.originalWidth;
                    y = y / transformInfo.originalHeight;
                    width = width / transformInfo.originalWidth;
                    height = height / transformInfo.originalHeight;
                    kp_x = kp_x / transformInfo.originalWidth;
                    kp_y = kp_y / transformInfo.originalHeight;
                    
                    x = Math.max(0, Math.min(1, x));
                    y = Math.max(0, Math.min(1, y));
                    kp_x = Math.max(0, Math.min(1, kp_x));
                    kp_y = Math.max(0, Math.min(1, kp_y));
                    
                    detections.push({
                        bbox: [x, y, width, height],
                        confidence,
                        keypoint: [kp_x, kp_y]
                    });
                }
            }
            
            return applyNMS(detections);
        }

        function smoothDetections(currentDetections) {
            const currentTime = performance.now();
            const dt = (currentTime - lastFrameTime) / 1000; // Convert to seconds
            lastFrameTime = currentTime;

            return currentDetections.map(detection => {
                // Predict next state
                bboxFilter.predict(dt);
                keypointFilter.predict(dt);

                // Update with new measurements
                const [x, y, width, height] = detection.bbox;
                bboxFilter.update([x, y, width, height]);

                const [kpX, kpY] = detection.keypoint;
                keypointFilter.update([kpX, kpY]);

                // Get filtered states
                const filteredBbox = bboxFilter.getState();
                const filteredKeypoint = keypointFilter.getState();

                return {
                    bbox: filteredBbox,
                    confidence: detection.confidence,
                    keypoint: filteredKeypoint
                };
            });
        }

        function calculateIoU(box1, box2) {
            const [x1, y1, w1, h1] = box1;
            const [x2, y2, w2, h2] = box2;
            
            const x1min = x1 - w1/2;
            const x1max = x1 + w1/2;
            const y1min = y1 - h1/2;
            const y1max = y1 + h1/2;
            
            const x2min = x2 - w2/2;
            const x2max = x2 + w2/2;
            const y2min = y2 - h2/2;
            const y2max = y2 + h2/2;
            
            const xOverlap = Math.max(0, Math.min(x1max, x2max) - Math.max(x1min, x2min));
            const yOverlap = Math.max(0, Math.min(y1max, y2max) - Math.max(y1min, y2min));
            
            const intersectionArea = xOverlap * yOverlap;
            const union = w1 * h1 + w2 * h2 - intersectionArea;
            
            return intersectionArea / union;
        }

        async function applyNMS(detections) {
            detections.sort((a, b) => b.confidence - a.confidence);
            
            const selected = [];
            const active = new Set(Array(detections.length).keys());
            
            for (let i = 0; i < detections.length; i++) {
                if (!active.has(i)) continue;
                
                selected.push(detections[i]);
                
                for (let j = i + 1; j < detections.length; j++) {
                    if (!active.has(j)) continue;
                    
                    const iou = calculateIoU(detections[i].bbox, detections[j].bbox);
                    if (iou >= IOU_THRESHOLD) active.delete(j);
                }
            }
            
            return selected;
        }

        function drawDetections(detections) {
            ctx.clearRect(0, 0, canvas.width, canvas.height);
            ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
            
            detections.forEach(detection => {
                const [x, y, width, height] = detection.bbox;
                const [keypointX, keypointY] = detection.keypoint;
                
                // Convert normalized coordinates to pixel values
                const boxX = (x - width/2) * canvas.width;
                const boxY = (y - height/2) * canvas.height;
                const boxWidth = width * canvas.width;
                const boxHeight = height * canvas.height;
                
                // Draw bounding box
                ctx.strokeStyle = 'red';
                ctx.lineWidth = 2;
                ctx.strokeRect(boxX, boxY, boxWidth, boxHeight);
                
                // Draw keypoint
                const kpX = keypointX * canvas.width;
                const kpY = keypointY * canvas.height;
                
                ctx.fillStyle = 'blue';
                ctx.beginPath();
                ctx.arc(kpX, kpY, 5, 0, 2 * Math.PI);
                ctx.fill();
                
                // Draw confidence score
                ctx.fillStyle = 'red';
                ctx.font = '14px Arial';
                ctx.fillText(`Conf: ${detection.confidence.toFixed(2)}`, boxX, boxY - 5);
            });
        }

        window.loadModel = loadModel;
        window.startWebcam = startWebcam;
    </script>
</body>
</html>

Things I have tried include adjusting the bounding box scaling and tuning the IoU and confidence thresholds.

Error: packages field is not an array when using pnpm-workspace.yaml in a monorepo setup

I’m working with a PNPM monorepo setup, and everything was working fine when I was using pnpm.yaml, but after switching to pnpm-workspace.yaml, I’m encountering the following error:

ERROR  packages field is not an array
For help, run: pnpm help run

Here’s my current directory structure:

root-directory/
  ├── packages/
  │    └── main/
  │        ├── index.js
  │        └── package.json
  ├── package.json
  └── pnpm-workspace.yaml

Root package.json:

{
  "name": "@levelup/root",
  "version": "1.0.0",
  "scripts": {
    "main:start": "pnpm -F @levelup/main run start",
    "test": "echo "Error: no test specified" && exit 1"
  },
  "packageManager": "[email protected]"
}

pnpm-workspace.yaml:

packages:
  - "packages/**"