How to dispatch the same action inside an axios interceptor after making a new API call through the interceptor

I have been using request and response interceptors.

Request interceptor – it puts the access token inside the Authorization header of every request.

Response interceptor – it checks whether the response was “Unauthorized” (status code 401), which means the access token has expired. If so, it refreshes the access token (by making an API call) and then replays the original request with the new access token. This part is working perfectly fine.

Dispatching actions – I am using Redux Toolkit to store my application state.
When the original request is replayed with the new access token, the result is returned, but I want to dispatch the same action that was originally dispatched for that request, so that the result of the new API call is stored in Redux.

It would be no issue if there were only one dispatch action, but there are many: I have many components and each has its own dispatch action.

How do I write code to make sure that the new API call with the new access token dispatches the same action that was cancelled by the 401 error?
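One detail worth sketching (with made-up helper names, not the actual code below): because the response interceptor returns the retried request's promise, the promise the original caller awaited resolves with the retried response, so the same thunk completes and the same fulfilled action is dispatched without any extra bookkeeping.

```javascript
// Sketch: an interceptor-style retry returns the retried promise to the
// original caller, so the caller's await resolves with the retry's result.
async function requestWithRetry(doRequest, refreshToken) {
  try {
    return await doRequest("stale-token");
  } catch (err) {
    if (err.status === 401) {
      const freshToken = await refreshToken();
      // Returning here resolves the ORIGINAL caller's promise.
      return doRequest(freshToken);
    }
    throw err;
  }
}

// Fake backend: 401 for the stale token, success for the fresh one.
const fakeRequest = (token) =>
  token === "fresh-token"
    ? Promise.resolve({ data: "workspace members" })
    : Promise.reject({ status: 401 });
const fakeRefresh = () => Promise.resolve("fresh-token");

requestWithRetry(fakeRequest, fakeRefresh).then((res) =>
  console.log(res.data) // the caller sees the retried response
);
```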


import axios from "axios";
import { api } from "../config";
import { refreshAccessToken } from "./fakebackend_helper";


axios.defaults.withCredentials = true;

axios.defaults.baseURL = api.API_URL;

axios.defaults.headers.post["Content-Type"] = "application/json";

axios.interceptors.request.use(function (config) {
    const latestToken = localStorage.getItem("access_token"); // Fetch the latest token
    if (latestToken) {
      config.headers.Authorization = `Bearer ${latestToken}`;
    }
    return config;
  });


axios.interceptors.response.use(
  function (response) {


    return response.data ? response.data : response;
  },
  async function (error) {

    try {
      const errorResponse = error.response;

      if (
        errorResponse.status === 401 &&
        errorResponse.data.message === "Unauthorized" &&
        !error.config._retry
      ) {
        const res = await refreshAccessToken();
        const newAccessToken = res.data.access_token;
        error.config._retry = true;

        axios.defaults.headers.common[
          "Authorization"
        ] = `Bearer ${newAccessToken}`;

        localStorage.setItem("access_token", newAccessToken);
        sessionStorage.setItem("access_token", newAccessToken);

        error.config.headers["Authorization"] = `Bearer ${newAccessToken}`;

        console.log("ERROR RE-REQUEST CONFIG ->", error.config);

        return axios(error.config);
      }

      return Promise.reject(error);
    } catch (refreshError) {
      // Renamed so it doesn't shadow the outer `error` parameter.
      console.log("Error while re-requesting the access token ->", refreshError);
      return Promise.reject(refreshError);
    }
  }
);

Here is the code of one of the reducers


import { createSlice } from "@reduxjs/toolkit";
import {
  inviteWorkspaceMember,
  setPasswordWorkspaceMember,
  getWorkspaceMembers,
} from "./thunk";

export const initialState = {
  workspaceMembers: [],
  error: "",
};

const workspaceMembersSlice = createSlice({
  name: "workspaceMembers",
  initialState,
  reducers: {},
  extraReducers: (builder) => {
    builder.addCase(getWorkspaceMembers.fulfilled, (state, action) => {
      state.workspaceMembers = action.payload.workspaceMembers;
    });
    builder.addCase(inviteWorkspaceMember.fulfilled, (state, action) => {
      state.workspaceMembers = [...state.workspaceMembers, action.payload.data];
    });
  },
});

export default workspaceMembersSlice.reducer;

Here is the thunk

import { createAsyncThunk } from "@reduxjs/toolkit";

import {
  getWorkspaceMembers as getWorkspaceMembersApi,
  inviteWorkspaceMember as inviteWorkspaceMemberApi,
  setPasswordWorkspaceMember as setPasswordWorkspaceMemberApi,
} from "../../helpers/fakebackend_helper";

export const getWorkspaceMembers = createAsyncThunk(
  "workspaceMembers/getWorkspaceMembers",
  async (workspaceId, { rejectWithValue }) => {
    try {
      const response = await getWorkspaceMembersApi(workspaceId);

      return response;
    } catch (error) {
      console.log("error inside getting workspace members thunk", error);
      // Reject so the error doesn't become a fulfilled action with an undefined payload.
      return rejectWithValue(error.message);
    }
  }
);


Form submission works, but the same request as fetch triggers a CORS error [duplicate]

This is not a duplicate of Why does my JavaScript code receive a “No ‘Access-Control-Allow-Origin’ header is present on the requested resource” error, while Postman does not?, because none of the answers to that question fix this issue. Many of those answers merely describe CORS and might as well have been copied from Wikipedia; they do not help at all. I am not asking for a description of CORS or how to enable it in general; I am asking about a specific technical issue with a minimal reproducible example.

I am attempting to upload some data to S3 using a pre-signed POST request. In doing so I have hit a CORS error:

Access to fetch at ‘http://localhost:4566/17acvaclgm-pictures’ from origin ‘http://127.0.0.1:4006’ has been blocked by CORS policy: No ‘Access-Control-Allow-Origin’ header is present on the requested resource. If an opaque response serves your needs, set the request’s mode to ‘no-cors’ to fetch the resource with CORS disabled.

I wrote a simple example doing the upload with a form:

<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <meta http-equiv="X-UA-Compatible" content="ie=edge" />
    <title>HTML 5 Boilerplate</title>
  </head>
  <body>
    <form
      action="http://localhost:4566/17acvaclgm-pictures"
      method="post"
      enctype="multipart/form-data"
    >
      <input type="hidden" name="key" value="browserObject" />
      <input type="file" name="file" />
      <input type="submit" value="Upload" />
    </form>
  </body>
</html>

This worked returning:

HTTP/1.1 204 NO CONTENT
Server: TwistedWeb/24.3.0
Date: Sun, 26 Jan 2025 05:53:45 GMT
Access-Control-Allow-Origin: *
Vary: Origin, Access-Control-Request-Headers, Access-Control-Request-Method
Access-Control-Allow-Methods: GET, HEAD, PUT, POST, DELETE
Location: http://17acvaclgm-pictures.s3.localhost.localstack.cloud:4566/browserObject
ETag: "f520dae5b605d9c92072c67d1ae2b8a7"
x-amz-server-side-encryption: AES256
x-amz-request-id: f5719cde-db04-498a-b8e3-d13a46f11e41
x-amz-id-2: s9lzHYrFp76ZVxRcpX9+5cjAnEH2ROuNkd2BHfIa6UkFVdtjf5mKR3/eTPFvsiP/XV/VLi31234=

So, to fix my attempt, I clicked “Copy as fetch”, giving:

fetch("http://localhost:4566/17acvaclgm-pictures", {
  "headers": {
    "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7",
    "accept-language": "en-GB,en-US;q=0.9,en;q=0.8",
    "cache-control": "max-age=0",
    "content-type": "multipart/form-data; boundary=----WebKitFormBoundary9m2KrDP2o5TzQkvt",
    "sec-ch-ua": "\"Google Chrome\";v=\"131\", \"Chromium\";v=\"131\", \"Not_A Brand\";v=\"24\"",
    "sec-ch-ua-mobile": "?0",
    "sec-ch-ua-platform": "\"Windows\"",
    "sec-fetch-dest": "document",
    "sec-fetch-mode": "navigate",
    "sec-fetch-site": "cross-site",
    "sec-fetch-user": "?1",
    "upgrade-insecure-requests": "1"
  },
  "referrer": "http://127.0.0.1:4040/",
  "referrerPolicy": "strict-origin-when-cross-origin",
  "method": "POST",
  "mode": "cors",
  "credentials": "omit"
});

I then updated referrer and added the body with my form data:

let formData = new FormData();
// ...
fetch("http://localhost:4566/17acvaclgm-pictures", {
  "headers": {
    // ...
  },
  // ...
  referrer: "http://127.0.0.1:4006/",
  // ...
  body: formData
});

I then ran this, and received the same error.

I checked the actual request headers and found that, despite being copied, the fetch request had different headers:

accept:
text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7
accept-language:
en-GB,en-US;q=0.9,en;q=0.8
cache-control:
max-age=0
content-type:
multipart/form-data; boundary=----WebKitFormBoundaryIjvYSjxNxnzfxn11
referer:
http://127.0.0.1:4006/
sec-ch-ua:
"Google Chrome";v="131", "Chromium";v="131", "Not_A Brand";v="24"
sec-ch-ua-mobile:
?0
sec-ch-ua-platform:
"Windows"
upgrade-insecure-requests:
1
user-agent:
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36

As opposed to the headers in the successful request:

accept:
text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7
accept-encoding:
gzip, deflate, br, zstd
accept-language:
en-GB,en-US;q=0.9,en;q=0.8
cache-control:
max-age=0
connection:
keep-alive
content-length:
335341
content-type:
multipart/form-data; boundary=----WebKitFormBoundary9m2KrDP2o5TzQkvt
host:
localhost:4566
origin:
http://127.0.0.1:4040
referer:
http://127.0.0.1:4040/
sec-ch-ua:
"Google Chrome";v="131", "Chromium";v="131", "Not_A Brand";v="24"
sec-ch-ua-mobile:
?0
sec-ch-ua-platform:
"Windows"
sec-fetch-dest:
document
sec-fetch-mode:
navigate
sec-fetch-site:
cross-site
sec-fetch-user:
?1
upgrade-insecure-requests:
1
user-agent:
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36

Notably, Origin is important to S3, as noted in https://docs.aws.amazon.com/AmazonS3/latest/userguide/cors-troubleshooting.html and in https://stackoverflow.com/a/32887912/4301453. So I attempted to add it:

// ...
fetch("http://localhost:4566/17acvaclgm-pictures", {
  "headers": {
    // ...
  },
  // ...
  origin: "http://127.0.0.1:4006"
});

This also failed with the same CORS error.

What am I doing wrong here?

All of this can be tested locally by running localstack start -d, then setting the CORS configuration, then running the fetch and/or submitting the form.

Here is some Rust code that sets the CORS configuration and then starts a server serving the form:

use axum::http::header;
use axum::{routing::get, Router};
use s3::serde_types::{CorsConfiguration, CorsRule};
use s3::Bucket;
use std::process::Command;
use std::process::Stdio;
use std::time::Duration;

const BUCKET_NAME: &str = "17acvaclgm-pictures"; // bucket name
const MAX_SIZE: u32 = 4194304; // 4mb

#[tokio::main]
async fn main() {
    // Include object data.
    let data = include_bytes!("picture.webp");

    // Wait for localstack instance to be running.
    let _wait = Command::new("localstack")
        .args(["wait", "-t", &10u32.to_string()])
        .stdout(Stdio::inherit())
        .stderr(Stdio::inherit())
        .output()
        .unwrap();

    // Set AWS region.
    let region = s3::Region::Custom {
        region: String::from("us-east-1"),
        endpoint: String::from("http://localhost:4566"),
    };
    println!("region: {region:?}");

    // Set AWS credentials.
    let credentials = awscreds::Credentials {
        access_key: Some(String::from("test")),
        secret_key: Some(String::from("test")),
        security_token: None,
        session_token: None,
        expiration: None,
    };
    println!("credentials: {credentials:?}");

    // Create S3 bucket.
    let _create_bucket_response = s3::Bucket::create_with_path_style(
        BUCKET_NAME,
        region.clone(),
        credentials.clone(),
        s3::BucketConfiguration::public(),
    )
    .await
    .unwrap();
    let s3_bucket = s3::Bucket::new(BUCKET_NAME, region, credentials)
        .unwrap()
        .with_path_style();

    // Set permissive CORS configuration.
    let account_id = String::from("000000000000");
    let _cors_response = s3_bucket
        .put_bucket_cors(
            &account_id,
            &CorsConfiguration::new(vec![CorsRule::new(
                Some(vec![String::from("*")]),
                vec![
                    String::from("GET"),
                    String::from("HEAD"),
                    String::from("PUT"),
                    String::from("POST"),
                    String::from("DELETE"),
                ],
                vec![String::from("*")],
                None,
                None,
                None,
            )]),
        )
        .await
        .unwrap();

    // Upload object using pre-signed url outside browser.
    let name = "nonBrowserObject";
    let url = presign(name, s3_bucket.clone()).await;
    println!("url: {url}\nkey: {name}");
    upload(name, &url, data).await;

    // Upload object using pre-signed url in browser by starting a server with a form upload using the pre-signed url.
    let name = "browserObject";
    let url = presign(name, s3_bucket.clone()).await;
    println!("url: {url}\nkey: {name}");
    let index = format!(
        r#"
            <!DOCTYPE html>
            <html lang="en">
            <head>
                <meta charset="UTF-8">
                <meta name="viewport" content="width=device-width, initial-scale=1.0">
                <meta http-equiv="X-UA-Compatible" content="ie=edge">
                <title>HTML 5 Boilerplate</title>
            </head>
            <body>
                <form action="http://localhost:4566/{BUCKET_NAME}" method="post" enctype="multipart/form-data">
                    <input type="hidden" name="key" value="{name}" />
                    <input type="file" name="file" />
                    <input type="submit" value="Upload" />
                </form>
            </body>
            </html>
        "#,
    );
    let app = Router::new().route(
        "/",
        get(|| async {
            axum::response::Response::builder()
                .status(axum::http::StatusCode::OK)
                .header(
                    header::CONTENT_TYPE,
                    header::HeaderValue::from_static(mime::TEXT_HTML_UTF_8.as_ref()),
                )
                .body(axum::body::Body::from(index))
                .unwrap()
        }),
    );
    let listener = tokio::net::TcpListener::bind("0.0.0.0:4040").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}

async fn presign(name: &str, s3_bucket: Box<Bucket>) -> String {
    let post_policy = s3::PostPolicy::new(s3::post_policy::PostPolicyExpiration::ExpiresIn(60))
        .condition(
            s3::PostPolicyField::Key,
            s3::PostPolicyValue::Exact(std::borrow::Cow::from(name.to_owned())),
        )
        .unwrap()
        .condition(
            s3::PostPolicyField::Bucket,
            s3::PostPolicyValue::Exact(std::borrow::Cow::from(s3_bucket.name())),
        )
        .unwrap()
        .condition(
            s3::PostPolicyField::ContentLengthRange,
            s3::PostPolicyValue::Range(0, MAX_SIZE),
        )
        .unwrap();
    let post = s3_bucket.presign_post(post_policy).await.unwrap();
    post.url
}
async fn upload(name: &str, url: &str, data: &[u8]) {
    tokio::time::sleep(Duration::from_secs(10)).await;
    let part = reqwest::multipart::Part::bytes(Vec::from(data))
        .file_name(name.to_owned())
        .mime_str("image/webp")
        .unwrap();
    let form = reqwest::multipart::Form::new()
        .text("key", name.to_owned())
        .text("bucket", BUCKET_NAME.to_string())
        .part("file", part);
    let response = reqwest::Client::new()
        .post(url)
        .multipart(form)
        .send()
        .await
        .unwrap();
    assert!(
        response.status().is_success(),
        "{}, {:?}",
        response.status(),
        response
            .text()
            .await
            .map(|s| s.chars().take(300).collect::<String>())
    );
}

where Cargo.toml is:

[package]
name = "testing"
version = "0.1.0"
edition = "2021"

[dependencies]
tokio = { version = "1.42.0", features = ["full"] }
rust-s3 = { git="https://github.com/JonathanWoollett-Light/rust-s3", rev="35dcc91893ce3376b94c6e2e7e029afe43ea9e7c"}
aws-creds = "0.38.0"
reqwest = { version = "0.12.7", features = ["cookies", "json", "stream", "multipart"] }
axum = "0.8.1"
mime = "0.3.17"

Node.js http.request() method is causing some unintended behaviour for keep-alive connections

I was learning about keep-alive headers in HTTP and decided to test them myself, so I created a Node.js client using an http agent and the http.request() method, plus a server.js file using Node.js only. The problem: when I make an HTTP request to my server using the client.js file, the socket.on('close') event fires immediately in server.js, even though keepAliveTimeout = 5000. But if I make the request using Postman (with or without a Connection: keep-alive header), the socket.on('close') event fires after 5 seconds. This is causing me a big headache.

Server.js file:

const http = require('http');

const port = process.env.PORT || 5000;
const server = http.createServer((req, res) => {
  if (req.method === 'GET' && req.url === '/') {
    req.on('close', () => {
      console.log('request closed');
    });
    console.log('get request arrived');
    console.log(req.headers);
    console.log(req.socket.timeout);
    res.writeHead(200, {
      'Content-Type': 'text/plain',
      'Connection': 'keep-alive',
    });
    res.on('finish', () => {
      console.log('Response finished. Checking socket state...');
      console.log('Socket writable:', req.socket.writable);
    });

    res.end('hello world');

    setTimeout(() => { console.log('i am timeout'); }, 5000);
    // Log when the socket closes.
    req.socket.on('close', () => {
      console.log('socket is closed');
    });

    req.socket.on('end', () => console.log('Socket ended'));
    req.socket.on('timeout', () => console.log('Socket timeout'));
  }
});
server.listen(port, () => {
  console.log('server is listening on port:', port);
});
server.keepAliveTimeout = 5000; // Keep socket alive for 5 seconds after response
server.headersTimeout = 60000;
console.log('timeout', server.keepAliveTimeout);

and Client.js file:


const http = require('http');

const agent = new http.Agent({
  keepAlive: true,
  maxSockets: 5, // Limit concurrent sockets
  maxFreeSockets: 1, // Allow only 1 free socket
  keepAliveMsecs: 5000, 
  timeout: 5000, // Timeout for inactive sockets
});


const options = {
  agent: agent,
  hostname: 'localhost', // Use only hostname, not "http://"
  port: 5000,
  method: 'GET',
  path: '/',
  headers: {
    'Accept': '*/*',
    'Connection':'keep-alive' ,
    'Keep-Alive':'timeout=5'
    // Basic headers
  },
};

const request = http.request(options, (response) => {
  console.log('Response Status:', response.statusCode);
  console.log('Response Headers:', response.headers);

  response.on('data', (chunk) => {
    console.log('Response Body:', chunk.toString('utf-8'));
  });

  response.on('end', () => {
    console.log('No more data in response.');
  });
});

request.on('error', (error) => {
  console.error('Request error:', error);
});

request.end();

Wix Velo JavaScript: I want to multiply two elements

I’m very new to Velo and JavaScript. I found some code and am now trying to modify it. The code below adds various elements together to produce one number. It works, but I want to multiply

Number(web_price_Arr.reduce((a, b) => a + b, 0))

and

$w('#deliveryTimeSelectionTags').value.map(Number).reduce((a, b) => a + b, 0);

This is the code

 price = Number(location_Arr.reduce((a, b) => a + b, 0)) + Number(web_price_Arr.reduce((a, b) => a + b, 0)) + (Number(pages_Num_Arr[0]) * 0) +
 $w("#featuresCheckboxGroup").value.map(Number).reduce((a, b) => a + b, 0) + $w('#deliveryTimeSelectionTags').value.map(Number).reduce((a, b) => a + b, 0);

As I said, I’m very new, so I tried switching the pieces around to exchange + for *. However, that broke the calculation. I also tried removing the .reduce((a, b) => a + b, 0) from web_price_Arr so that I could combine it with $w('#deliveryTimeSelectionTags').value.map(Number).reduce((a, b) => a + b, 0). I tried a couple of other variations, but they didn’t work.
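For what it’s worth, each .reduce(...) call already collapses its array into a single number, so the two expressions can be multiplied directly; a sketch with made-up values standing in for the Wix elements:

```javascript
// Stand-ins for the real data sources (made up for illustration):
const web_price_Arr = [10, 20, 30];
const deliveryTimeValues = ["2", "3"]; // like $w('#deliveryTimeSelectionTags').value

const webPrice = Number(web_price_Arr.reduce((a, b) => a + b, 0)); // 60
const deliveryFactor = deliveryTimeValues.map(Number).reduce((a, b) => a + b, 0); // 5

// Multiply the two reduced numbers instead of adding them:
const price = webPrice * deliveryFactor;
console.log(price); // 300
```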

Thank you for helping me and let me know if I need to add more context.

Trouble with bubble sort in an InDesign script: an interesting problem

I wrote a script to sort paragraphs in Adobe InDesign while preserving their formatting, using a bubble sort algorithm. Although the script successfully retains the original formatting, it only performs one pass, requiring multiple executions to complete the sorting.

I don’t care at all about the complexity of the algorithm or whether the program is incomplete. Since the program gets the correct result after running multiple times, I tried to loop some of its functions directly to achieve the same result.

I attempted to force the sorting by repeatedly calling the bubbleSortParagraphs(paraArray, reverseFlag) function for the number of paragraphs, but it didn’t work. I also tried looping the sortParagraphs(sortMethod, reverseFlag) function, but that didn’t succeed either. I am completely baffled as to why this is happening.

var pinyinSortOrder = {
    'A': 0, 'B': 1, 'C': 2, 'D': 3, 'E': 4, 'F': 5, 'G': 6, 'H': 7, 'I': 8, 'J': 9, 'K': 10, 'L': 11, 'M': 12, 'N': 13, 'O': 14, 'P': 15, 'Q': 16, 'R': 17, 'S': 18, 'T': 19, 'U': 20, 'V': 21, 'W': 22, 'X': 23, 'Y': 24, 'Z': 25,
    'a': 26, 'b': 27, 'c': 28, 'd': 29, 'e': 30, 'f': 31, 'g': 32, 'h': 33, 'i': 34, 'j': 35, 'k': 36, 'l': 37, 'm': 38, 'n': 39, 'o': 40, 'p': 41, 'q': 42, 'r': 43, 's': 44, 't': 45, 'u': 46, 'v': 47, 'w': 48, 'x': 49, 'y': 50, 'z': 51
};//a list to store the character order

function getCharWeight(ch) {
    if (pinyinSortOrder && pinyinSortOrder[ch] !== undefined) {
        return pinyinSortOrder[ch];
    } else {
        return ch.charCodeAt(0) + 100000;
    }
}

function compareString(a, b) {
    var arrA = a.split("");
    var arrB = b.split("");
    var minLen = Math.min(arrA.length, arrB.length);
    for (var i = 0; i < minLen; i++) {
        var diff = getCharWeight(arrA[i]) - getCharWeight(arrB[i]);
        if (diff !== 0) {
            return diff;
        }
    }
    return arrA.length - arrB.length;
}

function getSafeInsertionPoint(aStory, desiredIndex) {
    var count = aStory.insertionPoints.length;
    if (count === 0) return null;
    if (desiredIndex < 0) desiredIndex = 0;
    if (desiredIndex >= count) desiredIndex = count - 1;
    return aStory.insertionPoints[desiredIndex];
}

showDialog()

function showDialog() {
    var dlg = app.dialogs.add({ name: "Paragraph Sorting (Bubble Multiple Rounds)" });
    var dropdownSortMethod, chkReverse;

    with (dlg.dialogColumns.add()) {
        // Sorting Method
        with (dialogRows.add()) {
            staticTexts.add({ staticLabel: "Sorting Method:" });
            dropdownSortMethod = dropdowns.add({
                stringList: ["Ignore Formatting (Plain Text)", "Preserve Formatting (Bubble Swap)"],
                selectedIndex: 0
            });
        }
        // Reverse Order
        with (dialogRows.add()) {
            chkReverse = checkboxControls.add({
                staticLabel: "Reverse Order",
                checkedState: false
            });
        }
    }

    var confirmed = dlg.show();
    if (confirmed) {
        var method = dropdownSortMethod.selectedIndex;
        var reverseFlag = chkReverse.checkedState;
        dlg.destroy();
        sortParagraphs(method, reverseFlag);
    } else {
        dlg.destroy();
    }
}

/* =========== 5. Sorting Core =========== */
function sortParagraphs(sortMethod, reverseFlag) {
    var sel = app.selection[0];
    var myParagraphs = sel.paragraphs;

    // Collect Paragraphs
    var paraArray = [];
    for (var i = 0; i < myParagraphs.length; i++) {
        // If you need to skip empty paragraphs, you can check contents here
        paraArray.push(myParagraphs[i]);
    }

    // (B) Preserve Formatting: Use bubble sort multiple rounds
    bubbleSortParagraphs(paraArray, reverseFlag);
    alert("Preserve Formatting (Bubble): Sorting Completed!");
}

/* =========== 6. Bubble Sort (Multiple Rounds) ===========
   Each round starts from index=0 to n-2-i, comparing adjacent [j] and [j+1], swap if order is incorrect.
   swapped=false indicates no swap in this round => all sorted => break.
*/
function bubbleSortParagraphs(paraArray, reverseFlag) {
    var n = paraArray.length;
    for (var i = 0; i < n - 1; i++) {
        var swapped = false;

        for (var j = 0; j < n - 1 - i; j++) {
            var txtA = paraArray[j].contents;
            var txtB = paraArray[j+1].contents;
            var diff = compareString(txtA, txtB);

            // Ascending: diff>0 => swap; Descending: diff<0 => swap
            var needSwap = ((!reverseFlag && diff > 0) || (reverseFlag && diff < 0));
            if (needSwap) {
                paraArray[j+1].move(LocationOptions.BEFORE, paraArray[j]);
                // Also swap in the array
                var temp = paraArray[j];
                paraArray[j] = paraArray[j+1];
                paraArray[j+1] = temp;
                swapped = true;
            }
        }

        if (!swapped) {
            break;
        }
    }
}

I had even tried something like:

for (var i = 0; i < myParagraphs.length; i++) 
{ 
    var paraArray = []; 
    for (var i = 0; i < myParagraphs.length; i++) 
    { 
        paraArray.push(myParagraphs[i]); 
    } 
    bubbleSortParagraphs(paraArray, reverseFlag); 
} 

and it just doesn’t work.
app.selection is an InDesign feature that gets the paragraphs selected with the mouse.
What I put into the script is:

Haywire(X) 237
Crippling(X) 234
Recharge 238
Toxic(X) 240
Defensive 234
RazorSharp 238
Storm 240
Tainted 240
Unwieldy 241

and its output is:

Crippling(X) 234
Haywire(X) 237
Recharge 238
Defensive 234
RazorSharp 238
Storm 240
Tainted 240
Toxic(X) 240
Unwieldy 241

The expected output is below. (And I cannot just delete the other code: if I “reduce the code to only the necessary code” and, for this question, “strip all that code and call the bubble sort directly with initialised data”, it works correctly and produces exactly this:)

Crippling(X) 234
Defensive 234
Haywire(X) 237
RazorSharp 238
Recharge 238
Storm 240
Tainted 240
Toxic(X) 240
Unwieldy 241
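For comparison, the same comparator and bubble loop applied to plain strings outside InDesign (no live paragraph objects involved) sorts fully in one call, consistent with the observation above that the stripped-down code works:

```javascript
// Same ordering idea as the script above, on plain strings.
var order = {};
"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
  .split("").forEach(function (ch, i) { order[ch] = i; });

function charWeight(ch) {
  return order[ch] !== undefined ? order[ch] : ch.charCodeAt(0) + 100000;
}

function compareString(a, b) {
  var minLen = Math.min(a.length, b.length);
  for (var i = 0; i < minLen; i++) {
    var diff = charWeight(a[i]) - charWeight(b[i]);
    if (diff !== 0) return diff;
  }
  return a.length - b.length;
}

function bubbleSort(arr) {
  for (var i = 0; i < arr.length - 1; i++) {
    var swapped = false;
    for (var j = 0; j < arr.length - 1 - i; j++) {
      if (compareString(arr[j], arr[j + 1]) > 0) {
        var t = arr[j]; arr[j] = arr[j + 1]; arr[j + 1] = t;
        swapped = true;
      }
    }
    if (!swapped) break; // no swaps this round => fully sorted
  }
  return arr;
}

var lines = ["Haywire(X) 237", "Crippling(X) 234", "Recharge 238",
             "Toxic(X) 240", "Defensive 234", "RazorSharp 238",
             "Storm 240", "Tainted 240", "Unwieldy 241"];
console.log(bubbleSort(lines).join("\n"));
```

This prints the fully sorted list in one run, so the difference must lie in how the live InDesign paragraph objects behave, not in the loop itself.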

Issue with Apps Script batchUpdate method. Possible client-side bug

I’m invoking the batchUpdate method from the Google Apps Script Advanced Sheets Service, and I’m getting an error message that doesn’t make any sense.

I’m creating a request by:

const arr         = Array(7).fill().map(() => Array(996));
const sheetID     = //put sheet ID here
const worksheetID = //put worksheet ID here

const requests = [];

requests.push(
    {//Array
      updateCells: {
        range: 
        { 
          sheetId:          sheetID , 
          startRowIndex:    startRow - 1,
          startColumnIndex: startColumn - 1,
          endColumnIndex:   startColumn + arr[0].length
        },
        rows: arr.map(row => ({ values: row.map(element => ({ userEnteredValue: (isNaN(element) || element == "" ? { stringValue: element } : { numberValue: element }) })) })),
        fields: "userEnteredValue"
      }
    }
  );

Sheets.Spreadsheets.batchUpdate({ requests: requests}, worksheetID);

Note that at this point, arr is an empty 7×996 array, startRow is 5, and startColumn is 2. Also, the dimensions are hard-coded here, but in the actual code I call getDisplayValues() on the full range of the sheet starting at startRow and startColumn. I’m writing this value to a sheet with 1000 rows. Row 5 of the sheet has values written to it, but the result of the request above should overwrite those values. I’m getting the following error:

GoogleJsonResponseException: API call to sheets.spreadsheets.batchUpdate failed with error: Invalid requests[8].updateCells: Attempting to write row: 138, beyond the last requested row of: 137

This doesn’t make sense, because there are 1000 rows in the sheet. Furthermore, I’m making multiple updateCells requests, and the one right before this one works with no issues and has a similar makeup to the current sheet with the problem. I’m wondering if there is some server-side bug. The intention of the request is to “clear” the sheet. This post is similar to my issue, but the proposed solutions seem nonsensical. I see no logical reason why I should be getting this error.
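For reference (index arithmetic only, not a diagnosis): GridRange indexes in the Sheets API are zero-based and half-open (the end index is exclusive), so a fully specified range for a block of arr.length rows and arr[0].length columns starting at 1-based (startRow, startColumn) works out to:

```javascript
const startRow = 5, startColumn = 2;
const arr = Array(7).fill().map(() => Array(996));

// Zero-based, half-open [start, end) on both axes.
const range = {
  startRowIndex: startRow - 1,                     // 4
  endRowIndex: startRow - 1 + arr.length,          // 11 (exclusive)
  startColumnIndex: startColumn - 1,               // 1
  endColumnIndex: startColumn - 1 + arr[0].length, // 997 (exclusive)
};
console.log(range);
```

For comparison, the snippet above computes endColumnIndex as startColumn + arr[0].length (one column wider than startColumnIndex + arr[0].length) and leaves endRowIndex unset.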

TypeError: attacher.call is not a function

I have a function, called in a dynamic route, that pulls markdown from a .md file, parses it, and renders it as HTML content.

import fs from 'fs';
import path from 'path';
import matter from 'gray-matter';
import { remark } from 'remark';
import html from 'remark-html';
import rehypeParse from 'rehype-parse';
import rehypeStringify from 'rehype-stringify';
import { unified } from 'unified';
import { visit } from 'unist-util-visit';
import remarkPrism from 'remark-prism'; // For syntax highlighting

function rehypeWrapPreBlocks() {
  return (tree) => {
    visit(tree, 'element', (node, index, parent) => {
      if (node.tagName === 'pre') {
        const wrapperNode = {
          type: 'element',
          tagName: 'div',
          properties: { className: ['code-block-wrapper'] },
          children: [node],
        };
        parent.children[index] = wrapperNode;
      }
    });
  };
}

const postsDirectory = path.join(process.cwd(), '/src/app/content/pages');

export async function getPageContent(id) {
  const fullPath = path.join(postsDirectory, `${id}.md`);

  const fileContents = fs.readFileSync(fullPath, 'utf8');

  const matterResult = matter(fileContents);

  const processedContent = await unified()
    .use(remark)
    .use(remarkPrism)
    .use(html)
    .use(rehypeParse, { fragment: true })
    .use(rehypeWrapPreBlocks)
    .use(rehypeStringify)
    .process(matterResult.content);

  const contentHtml = processedContent.toString();

  return {
    id,
    contentHtml,
    ...matterResult.data,
  };
}

this is called through this api

import { NextResponse } from 'next/server';

import { getPageContent } from '@/app/lib/markdown';

export async function GET(request, context) {
  const { params } = context;
  const slug = params.slug.toLowerCase();

  try {
    const content = await getPageContent(slug);
    return NextResponse.json({
      content,
    });
  } catch (err) {
    return NextResponse.json({
      error: `${err}`,
    });
  }
}

And its returning an error
{"error":"TypeError: attacher.call is not a function"}

The code was working fine initially, but I had to change how it parses the markdown so that items such as pre blocks are wrapped in a div I could add a specific class name to. I ended up with this, and it broke.

I haven’t used remark, unified(), rehype, etc. before, so I may have made some normally obvious mistake that I’ve overlooked; if so, please let me know.

Furthermore, if you think the way I’ve chosen to wrap the pre block in a div is wrong or could be improved upon, please let me know; I’d be glad to learn how to improve. Thank you!
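Independent of the attacher error, the wrapping logic itself can be exercised outside the pipeline on a plain hast-like object; a minimal sketch (a hand-rolled walk standing in for unist-util-visit):

```javascript
// Plain-object sketch of the pre-wrapping transform: no unified, rehype,
// or unist-util-visit needed, just a walk over a hast-like tree.
function wrapPreBlocks(node, parent, index) {
  if (node.tagName === 'pre' && parent) {
    parent.children[index] = {
      type: 'element',
      tagName: 'div',
      properties: { className: ['code-block-wrapper'] },
      children: [node],
    };
    return; // don't descend into the node we just wrapped
  }
  (node.children || []).forEach((child, i) => wrapPreBlocks(child, node, i));
}

const tree = {
  type: 'root',
  children: [
    { type: 'element', tagName: 'p', children: [] },
    { type: 'element', tagName: 'pre', children: [] },
  ],
};
wrapPreBlocks(tree);
console.log(tree.children[1].tagName); // 'div' wrapping the original 'pre'
```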

Why does javascript think my string is a number?

!isNaN("0x8d0adfd44b4351e5651c09566403e7edc586243dc0890f8e86c043a924a9592c")

The above results in TRUE, meaning javascript thinks this is a number.

I’ve also tried

!isNaN(parseFloat("0x8d0adfd44b4351e5651c09566403e7edc586243dc0890f8e86c043a924a9592c"))

which also returns TRUE.
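What is happening here follows standard JS coercion rules: isNaN() converts its argument with Number(), and Number() accepts 0x-prefixed strings as hex literals, so the string converts to a huge but finite number. parseFloat() does not understand hex; it parses the leading "0" and stops at the "x":

```javascript
const s = "0x8d0adfd44b4351e5651c09566403e7edc586243dc0890f8e86c043a924a9592c";

console.log(Number(s));             // a huge but finite number (hex parse)
console.log(parseFloat(s));         // 0 -- parses "0", stops at "x"
console.log(!isNaN(s));             // true: Number(s) is not NaN
console.log(!isNaN(parseFloat(s))); // true: 0 is not NaN
```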

Node.js SOAP package ESLint errors (Unsafe) in strict TypeScript

Alright, I have a huge problem getting the soap package to work in strict TypeScript mode. I have tried to solve this issue using four AIs, so as a last resort I’m trying Stack Overflow.

The following script produces a bunch of ESLint errors for line 6:

const client: Soap.Client = await Soap.createClientAsync(url);

and they just don’t go away. Is it really impossible to create strict-TypeScript SOAP clients?

I also tried replacing catch (error: unknown) with catch (error: Error), without result.
Can you reproduce this problem and fix it? I hope so, thank you!

import * as Soap from 'soap';

  async fetchSoapData() {
    const url: string = 'http://example.com/wsdl?wsdl';
    try {
      const client: Soap.Client = await Soap.createClientAsync(url);
    } catch (error: unknown) {
      if (error instanceof Error) {
        console.error('Error:', error.message);
      } else {
        console.error('Unknown error:', error);
      }
    }
  }
Line  Part of code                   Error
L6    const client                   Unsafe assignment of an `error` typed value.
L6    Soap.createClientAsync(url)    Unsafe call of a(n) `error` type typed value.

How should I have a client identify another user?

I am currently working on a game built with Go and JS. It's supposed to be a multiplayer game in which you battle against other players. I am setting up the structure and was wondering how I should store, on the JS client side, the identity of the other users a player is engaging with. I don't want to use their internal IDs, and assigning another random unique identifier and saving it on each person is nearly the same as just using the IDs themselves.

So what is the industry standard for this?

I was thinking of maybe using something that the server temporarily encodes with another random string, so that only the server can then decipher which user is meant. Maybe also something that is random between sessions? Or have it so that user A and user B see different persistent encoded IDs for user C? I am really not 100% sure how to approach this safely.

Thanks for y'all's help!

How to learn programming more efficiently [closed]

I’m a second-year IT student, and I’ve been having some trouble learning how to code because I tend to forget things easily.

Right now, I’m focusing on Python, HTML, CSS, and JavaScript since I’m really interested in web development. Could you give me some tips or strategies to learn programming more efficiently and retain what I learn better? Also, are there any other languages or technologies related to web development that I should consider learning?

Fullcalendar Mutation Observer Failing to Detect Deletion of DOM Element

I am modifying the text of the “More” links created by Fullcalendar V6. When those links are clicked, Fullcalendar re-renders its own text into the element, which is behind the popover div and cannot be seen. Once the popover is deleted, I need to again correct the text.

I have set listeners on the cancel button which work well, but if the user clicks anywhere outside the popover or taps the escape key, the popover is deleted without my MutationObserver firing. If I set a timeout to delete the popover, the mutation observer correctly detects the change.

What am I missing?

(This project includes jQuery, but it’s not necessary)

$(document).on('click', 'a.fc-daygrid-more-link, .fc-popover-close', function(){
    // fullCalendar rerenders the More links when they are clicked, so we have to redo the fixes
    fixLinksOnCalendar(true);
    setTimeout(() => {
        var popoverID = $(this).attr('aria-controls');
        console.log('popover id='+popoverID );
        var elem = document.getElementById(popoverID)
        observeDeletion(elem);
        
        // this test deletion is detected as expected
        setTimeout(function(){
            console.log('Deleting the elem');
            $(elem).remove();
        }, 5000);
    }, 400);

});

function observeDeletion(elem){
    var target = elem.parentNode;
    console.log('observing target:', target);
    const calPopupObserver = new MutationObserver(function (e) {
        console.log('Generic Mutation:', e);
        // removedNodes is a NodeList, so typeof never returns 'array';
        // also, the record is e[0], not e
        if (e[0].removedNodes.length > 0 && e[0].removedNodes[0] === elem){
            console.log('Fixing Links');
            fixLinksOnCalendar(true);
            calPopupObserver.disconnect();
            return;
        }
    });
    calPopupObserver.observe(target, { childList: true, subtree: true, });
}
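A more forgiving variant (a sketch for a browser context; observeRemoval and its callback are made-up names) is to stop matching removedNodes[0] against the popover exactly and instead treat any mutation after which the element is detached from the document as the deletion. This also catches FullCalendar removing an ancestor wrapper rather than the element itself:

```javascript
// Hypothetical sketch: observe document.body and fire onRemoved once the
// element is no longer in the document, whether it was removed directly
// or via an ancestor node being removed.
function observeRemoval(elem, onRemoved) {
  const observer = new MutationObserver(() => {
    if (!document.contains(elem)) { // detached, directly or indirectly
      observer.disconnect();
      onRemoved();
    }
  });
  observer.observe(document.body, { childList: true, subtree: true });
  return observer;
}
```

Observing document.body with subtree: true is heavier than observing one parent, but it removes the guesswork about which node FullCalendar actually deletes.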

Helmet Connect-Src CSP is present on Dev, but Not on Production?

Here’s my helmetOptions.contentSecurityPolicy.directives.connectSrc:

[
  "'self'",
  "http://myDomainName",
  "ws://myDomainName",
  "https://*.myServiceProvider_1.io/",
  "https://*.myServiceProvider_2.com/",
  "https://*.myServiceProvider_3.net/",
  "https://*.google-analytics.com/",
  "https://*.google.com/",
  "https://stats.g.doubleclick.net/",
  "https://analytics.google.com/",
  [.....]
]

It gets added to my site’s CSP and on local dev, everything works as expected.

In the Network tab of browser dev tools, you can see what the CSP looks like as received by the browser. Here’s what it is on local dev for connect-src:

[.....] connect-src 'self' http://myDomainName ws://myDomainName https://*.myServiceProvider_1.io/ https://*.myServiceProvider_2.com/ https://*.myServiceProvider_3.net/ https://*.google-analytics.com/ https://*.google.com/ https://stats.g.doubleclick.net/ https://analytics.google.com/ [.....];

But here’s what it looks like in production, hosted on AWS:

[.....]connect-src * 'self'; 

There’s nothing there — it’s all missing.

How can that be?

Most of the rest of my CSP is on the production site, but there are other anomalies in that CSP as well. I’m listing just this one in the hope that if I fix it, the others will also be fixed.

Note: it’s probably not relevant, but my build tool is Meteor.

Nodejs behavior with multiple I/O calls

I understand that Node.js uses the libuv library for I/O tasks. By default libuv uses four threads for handling I/O tasks. I am trying to understand the behavior of libuv when more than four I/O tasks are scheduled. Does a thread wait until it finishes reading its assigned file before reading another file, or does it switch between many unfinished files?

Below is code that logs "data" events and "end" events from multiple read streams:

const fs = require('node:fs');
const path = require('node:path');

function getFilePaths(directory){
  let files = fs.readdirSync(directory)
  return files
  .map((file) => path.join(directory, file))
  .filter((filePath) => fs.statSync(filePath).isFile())
}

const directory = 'D:\\test-folder'; // backslash must be escaped, or '\t' becomes a tab

const files = getFilePaths(directory)

const streams = []

files.forEach((file, index)=>{
  streams.push([index, fs.createReadStream(file)])
})

streams.forEach(([index, stream])=>{
  stream.on('data', () => {
    console.log(`Data: Stream ${index}`);
  });

  stream.on('end', () => {
    console.log(`End: Stream ${index}`);
  });

  stream.on('error', (err) => {
    console.error(`Error: Stream ${index}`);
  });

})

In the output I expected at least one of the first four files to be fully read before any data chunk is received for the remaining files. Instead, it appears the libuv threads don't wait until a file is fully read before starting to read another file.

Is this the expected behavior?

Console output showing data events from various streams

Test works on explicit string, but not a variable

I hope I am just overtired and someone will excoriate me for lump-headedness, but this is just bizarre:

Welcome to Node.js v20.18.1.
Type ".help" for more information.
> row = { tag: 'Key', title: 'Title' }
{ tag: 'Key', title: 'Title' }
> 
> values = Object.values(row)
[ 'Key', 'Title' ]
> /^\w+$/.test(values[0])
false
> /^\w+$/.test('Key')
true
> word = "Key"
'Key'
> /^\w+$/.test(word)
true
> values[0] === word
false
> values[0]
'Key'
>

I used jschardet to verify that values[0] and word have the same encoding (ascii). This happens on Node 18.20.3 as well as 20.18.1.
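Since values[0] === word is false even though both print as 'Key', values[0] almost certainly carries invisible code points that the regex then trips over. A sketch of how to reveal them (the zero-width space below is a made-up illustration of such a character, not necessarily the one in your data):

```javascript
// Two strings can print identically yet differ by invisible code points.
const clean = 'Key';
const tainted = 'Key\u200b'; // hypothetical trailing zero-width space

// Dump each string's code points in hex to make the difference visible.
const dump = (s) => [...s].map((c) => c.codePointAt(0).toString(16));

console.log(dump(clean));           // [ '4b', '65', '79' ]
console.log(dump(tainted));         // [ '4b', '65', '79', '200b' ]
console.log(clean === tainted);     // false, though both display as "Key"
console.log(/^\w+$/.test(clean));   // true
console.log(/^\w+$/.test(tainted)); // false: \w does not match U+200B
```

Dumping `[...values[0]].map(c => c.codePointAt(0).toString(16))` in the REPL should show exactly which extra code points the value carries.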