Select2 inputs configuration after AJAX/HTMX form insert in bootstrap modal

I have some select2 inputs $(".select2") on the page for filtering results. After an AJAX (GET) call I need to insert a form $("#updateForm") received from the response into a Bootstrap modal $("#updateProductModal"). Everything works fine except the select2 dropdowns, which are not aligned with the input element. After some research I found out that this is a common issue and that, to fix it, I need to add dropdownParent to the .select2() init.

I’ve added some code on success response:

success: function (response) {
    if ($method == "POST") {
        location.reload();
    } else {
        var $form = $(response.content);
        $("#updateProductModal .modal-body").html($form);
        // attaching select2 to modal
        $("#updateProductModal .select2").select2({
            placeholder: "Your input here", // for testing purposes
        });
    };
}, 

However, no placeholders appear. I suspect that the config from the earlier $(".select2").select2(someConfig) initialization causes the second select2() call to be ignored.
Is there any way to configure a new batch of inputs contained in the inserted form with additional parameters?
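For reference, this is roughly the re-initialization I have in mind: destroying any previous instance before applying the new options. The select2-hidden-accessible class is how select2 4.x marks already-initialized elements; the selectors and placeholder text are just the ones from my markup above, so treat this as a sketch:

```javascript
// Re-initialize the selects that were just inserted into the modal.
// Destroying a previous instance first ensures the new options
// (placeholder, dropdownParent) are not silently ignored.
var $modal = $("#updateProductModal");
$modal.find(".select2").each(function () {
    var $el = $(this);
    if ($el.hasClass("select2-hidden-accessible")) {
        $el.select2("destroy"); // was initialized earlier
    }
    $el.select2({
        placeholder: "Your input here",
        dropdownParent: $modal // keeps the dropdown aligned inside the modal
    });
});
```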

GSAP scaleX: 1 animation reverts to scale: none in DOM

I’m trying to animate an element’s width from scaleX(0) to scaleX(1) using GSAP. Here’s my setup:

Initial CSS: The element starts with transform: scaleX(0); in CSS:

.line {
    background-color: rgba(39, 39, 39, 0.3);
    max-width: 1360px;
    height: 1px;
    transform: scaleX(0);
    transform-origin: center;
}

GSAP Animation Code: I use GSAP to animate the element to scaleX(1) upon scrolling, with a slight delay using setTimeout:

setTimeout(() => {
  gsap.to(".line-wrapper .line", {
    scaleX: 1,
    scrollTrigger: {
      trigger: ".line-wrapper",
      start: "top bottom",
      scrub: false,
      markers: false,
    },
    duration: 1.2
  });
}, 100);

Even though I set scaleX: 1 in GSAP, the resulting DOM shows

translate: none;
rotate: none;
scale: none;
transform: translate(0px, 0px);

and the element does not animate as expected. I also tried transform: "scale(1, 1)" in GSAP and transform: scale(0, 0) in CSS, but it didn't work either.

How can I make sure GSAP animates from scaleX(0) to scaleX(1) and reflects the correct scaleX value in the DOM? Also, why does the DOM revert to scale: none despite the GSAP setting?
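For completeness, one variant I am considering is letting GSAP own both ends of the transform with fromTo, instead of relying on the CSS starting value (the selector and ScrollTrigger settings are the same as in my code above; this is a sketch, not something I have verified):

```javascript
setTimeout(() => {
  // fromTo makes GSAP set the start value itself, so the animation does
  // not depend on reading transform: scaleX(0) back out of the CSS.
  gsap.fromTo(".line-wrapper .line",
    { scaleX: 0 },            // GSAP-controlled start value
    {
      scaleX: 1,
      duration: 1.2,
      scrollTrigger: {
        trigger: ".line-wrapper",
        start: "top bottom",
      },
    }
  );
}, 100);
```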

Is there a way to fold all sections of code found by a search in Visual Studio Code?

I would like to fold all JSDoc comments within my code. I just found that folding all comments can be done with Ctrl+K Ctrl+/. Yet I am still wondering if there is a way to fold all occurrences in the code of a searched phrase like /**.

I have searched for “vscode fold all occurrences of a searched phrase”, and found this reference page that mentions:

Fold All Block Comments (Ctrl+K Ctrl+/) folds all regions that start with a block comment token.

This folds all comments, including the JS doc comments. However, this still does not allow the folding of code segments that match a search phrase.

(In jQuery) how can one handle racing between two callbacks on the same event that can be triggered multiple times?

I have two callbacks that need to be attached to the same change event on the same item. For reasons that are not worth going into, these callbacks must live in separate on calls; I cannot unify them under a single on call. For example:

$('body').on('change', '#my-div', function1);
$('body').on('change', '#my-div', function2); 

Now, I have an AJAX call inside function1. I would like function2 to always execute after function1 is fully done, including after the AJAX response has been received inside function1. I asked ChatGPT and it advised me to use $.Deferred():

let function1Deferred = $.Deferred();
function function1() {
    function1Deferred = $.Deferred();
    $.ajax({
        url: 'your-url', // Replace with your URL
        method: 'GET',
        success: function(data) {
            console.log("Function1 AJAX request completed");
            function1Deferred.resolve(); 
        },
        error: function() {
            function1Deferred.reject();
        }
    });
}

function function2() {
    function1Deferred.done(function() {
        console.log("Function2 is now executing after function1 has completed.");
    });
}

To my mind, though, this only solves the problem the first time the change event gets triggered. There is no guarantee that function1 will run before function2, and therefore no guarantee that, on the second, third, etc. trigger of the change event, the line function1Deferred = $.Deferred(); inside function1 will run before function2 does. So on subsequent triggers, function2 might well be the first to run, and run to completion.

Am I missing something, is the code actually achieving what I want and I’m just missing something about how Deferred works under the hood? If yes, what am I missing? If not, how can I solve my problem and ensure function2 always runs after function1 on subsequent triggers of the event, not just on the first one?

I just want to emphasize again that I need the calls of the on function to stay separate, I cannot use a single on('change', '#my-div', newFunction) call, this is not a solution for my problem in its larger context.
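For what it's worth, here is the pattern I am considering. It relies on the assumption that jQuery invokes handlers for the same event in the order they were attached; if that holds, function1's synchronous body, which resets the deferred, always runs before function2 on every trigger:

```javascript
let function1Deferred = $.Deferred();

function function1() {
    // Reset the deferred on every trigger, synchronously,
    // before the async work starts.
    function1Deferred = $.Deferred();
    $.ajax({
        url: 'your-url',
        method: 'GET',
        success: function (data) {
            function1Deferred.resolve(data);
        },
        error: function () {
            function1Deferred.reject();
        }
    });
}

function function2() {
    // Capture the *current* deferred. jQuery deferreds also fire
    // callbacks attached after resolution, so no race on that side.
    function1Deferred.done(function () {
        console.log("function2 runs after function1's AJAX completed.");
    });
}

$('body').on('change', '#my-div', function1); // attached first, runs first
$('body').on('change', '#my-div', function2);
```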

Why am I getting this error despite setting maxSupportedTransactionVersion to 0?

I’m trying to verify a Solana transaction using the QuickNode RPC endpoint, but I’m getting the following error:

Starting verification process...  
Amount: 0.001  
Error during transaction verification: SolanaJSONRPCError: failed to get transaction: Transaction version (0) is not supported by the requesting client. Please try the request again with the following configuration parameter: "maxSupportedTransactionVersion": 0  
at Connection.getParsedTransaction (D:[email protected]:7456:13)  
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)  
at async verifyDepositTransaction (file:///D:/Project1-proto/backend/utils/transaction.utils.js:19:25)  
at async depositWithPhantom (file:///D:/Project1-proto/backend/controllers/transactions.controller.js:16:21)  

Here is the code I’m using for verification:

export async function verifyDepositTransaction(pubkey, amount, signature) {
    console.log('Starting verification process...');
    console.log('Public Key:', pubkey);
    console.log('Amount:', amount);
    console.log('Signature:', signature);

    const connection = new Connection(process.env.SOLANA_RPC_ENDPOINT, {
        maxSupportedTransactionVersion: 0,
        commitment: 'confirmed'
    });

    try {
        const transaction = await connection.getParsedTransaction(signature, 'confirmed');
        console.log('Fetched Transaction:', JSON.stringify(transaction, null, 2));

        if (!transaction || !transaction.transaction || !transaction.transaction.message) {
            console.error('Transaction not found or invalid format.');
            return false;
        }

        const fromPubkey = new PublicKey(pubkey);
        const toPubkey = new PublicKey(process.env.COMPANY_SOL_ADDRESS); // Replace with your company wallet address
        const lamports = parseFloat(amount) * 1e9; // Convert SOL to lamports

        console.log('From Public Key:', fromPubkey.toString());
        console.log('To Public Key:', toPubkey.toString());
        console.log('Lamports:', lamports);

        const instructions = transaction.transaction.message.instructions;

        // Ensure instructions array exists and is not empty
        if (!instructions || instructions.length === 0) {
            console.error('No instructions found in the transaction.');
            return false;
        }

        console.log('Instructions found:', instructions.length);

        // Iterate through the instructions to find a match
        const isValid = instructions.some((instruction, index) => {
            console.log(`Checking instruction ${index + 1} of ${instructions.length}`);
            console.log('Instruction:', JSON.stringify(instruction, null, 2));

            if (!instruction || !instruction.programId || !instruction.parsed) {
                console.error('Instruction is missing required fields.');
                return false;
            }

            const parsedInfo = instruction.parsed.info;

            // Check that the program ID is SystemProgram, and the sender and receiver addresses match
            const matches = instruction.programId.equals(SystemProgram.programId) &&
                parsedInfo.source === fromPubkey.toString() &&
                parsedInfo.destination === toPubkey.toString() &&
                parsedInfo.lamports === lamports;

            console.log('Instruction match result:', matches);
            return matches;
        });

        console.log('Transaction validation result:', isValid);
        return isValid;

    } catch (error) {
        console.error('Error during transaction verification:', error);
        return false;
    }
}

I keep getting this error and it is really frustrating; I have asked every AI about it and got no solution. The RPC endpoint I am using is from QuickNode. My Solana library is up to date, so that's not the root of the error. Fingers crossed :).
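For reference, one thing I have not ruled out is that the option needs to go on the getParsedTransaction call itself rather than on the Connection constructor — per the @solana/web3.js signature, the second argument can be a config object that includes maxSupportedTransactionVersion. A sketch of that variant:

```javascript
// Sketch: pass maxSupportedTransactionVersion per request instead of
// (or in addition to) the Connection constructor options.
const transaction = await connection.getParsedTransaction(signature, {
    commitment: 'confirmed',
    maxSupportedTransactionVersion: 0,
});
```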

How to Limit a Select List Filter to Search Based on Exact Matches Instead of Substrings

I am working on a project where I have a select list with a filter functionality. Currently, the filter searches for substrings within the options, but I would like to modify it so that it only searches for exact matches based on the selected data.

For example, say the select list for column index 2 contains the option "New", and the table has rows with "New" as well as rows with "NRND (Not Recommended for New Design)". The filter should not return the "NRND (Not Recommended for New Design)" rows, because there "New" only appears as a substring, not as an exact match.

columns = jQuery.parseJSON(data);
table.columns([38]).every(function (i) {     
  var column = this;
  $(column.header()).append("");
  var select = $('<select style="max-width:80px"><option value="">--</option></select>')
    .appendTo($(column.header()))
    .on('change', function () {
       filter_values[column.index()] = $(this).val();
       table.search(filter_values.toString() ? '' + filter_values.toString() + '' : '', true, false).draw();
    });
    select.append('<option value="0">0.X</option>');
    select.append('<option value="1">1.X</option>');
    select.append('<option value="2">2.X</option>');
    select.append('<option value="3">3.X</option>');
});

$(column.header()).append("<br>");
var select = $('<select style="max-width:80px"><option value="">--</option></select>')
  .appendTo($(column.header()))
  .on('change', function () {
    var selectedValue = $(this).val();
    filter_values[column.index()] = selectedValue;

    // Use the search method to filter and display the data
    table.search(filter_values.toString() ? '' + filter_values.toString() + '' : '', true, false).draw();
  });

  // Populate the select options
  columns[column.index()].forEach(function (d, j) {
    select.append('<option value="' + d + '">' + d + '</option>');
  }); 
});
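A sketch of the direction I am exploring: DataTables' column().search() accepts a regular expression when its second argument is true, so anchoring the selected value with ^ and $ (and escaping regex metacharacters) should turn the substring match into an exact match. The column index 2 is my example; the helper name is my own:

```javascript
// Escape regex metacharacters so option text like "NRND (…)" is treated
// literally, then anchor the pattern to force an exact cell match.
function exactMatchRegex(value) {
  var escaped = value.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
  return '^' + escaped + '$';
}

// Inside the select's change handler, search one column with
// regex enabled (2nd arg true) and smart search off (3rd arg false):
// table.column(2).search(val ? exactMatchRegex(val) : '', true, false).draw();

// Quick check of the pattern itself:
var re = new RegExp(exactMatchRegex('New'));
console.log(re.test('New'));                                   // true
console.log(re.test('NRND (Not Recommended for New Design)')); // false
```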

Error when app loads: navigating before mounting the Root Layout component

I have this structure in my React Native app with Expo Router, at the app/_layout.tsx level. Below is my code. I am trying to fix the error that I get when the app loads. Essentially I am checking whether the user is logged in: if not, take them to the login screen, otherwise to the HomeScreen (app/(tabs)).

// app/_layout.tsx

SplashScreen.preventAutoHideAsync();

export default function RootLayout() {
  const colorScheme = useColorScheme();
  const [loaded] = useFonts({
    SpaceMono: require("../assets/fonts/SpaceMono-Regular.ttf"),
  });
  const [isLoggedIn, setIsLoggedIn] = useState<boolean | null>(null);
  const router = useRouter();

  useEffect(() => {
    async function initializeAuth() {
      const loggedIn = await checkAuthStatus();
      setIsLoggedIn(loggedIn);
      SplashScreen.hideAsync();
    }
    initializeAuth();
  }, []);

  useEffect(() => {
    if (isLoggedIn === false) {
      router.replace("/(auth)/LoginScreen");
    }
  }, [isLoggedIn]);

  if (!loaded || isLoggedIn === null) {
    return null;
  }

  return (
      <GestureHandlerRootView style={{ flex: 1 }}>
        <ThemeProvider value={colorScheme === "dark" ? DarkTheme : DefaultTheme}>
          <Stack screenOptions={{ headerShown: false }}>
            <Slot />
          </Stack>
        </ThemeProvider>
      </GestureHandlerRootView>
  );
}

authUtils.js
import * as SecureStore from "expo-secure-store";

// Utility to check if a user token is saved
export async function checkAuthStatus() {
    const token = await SecureStore.getItemAsync("authToken");
    return !!token;
}

// Utility to save token
export async function setAuthToken(token) {
    await SecureStore.setItemAsync("authToken", token);
}

// Utility to clear token
export async function clearAuthToken() {
    await SecureStore.deleteItemAsync("authToken");
}
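One variant I am considering is making the redirect declarative instead of imperative, so no navigation is attempted before the layout tree has mounted (expo-router exports a Redirect component; the route string and import path below are my assumptions based on my structure above):

```javascript
import { useEffect, useState } from "react";
import { Redirect, Slot } from "expo-router";
import { checkAuthStatus } from "../utils/authUtils"; // path is my assumption

export default function RootLayout() {
  const [isLoggedIn, setIsLoggedIn] = useState(null);

  useEffect(() => {
    checkAuthStatus().then(setIsLoggedIn);
  }, []);

  if (isLoggedIn === null) {
    return null; // still resolving auth; splash screen stays up
  }
  if (isLoggedIn === false) {
    // Rendered, not called imperatively, so it cannot fire before
    // the root layout component has mounted.
    return <Redirect href="/(auth)/LoginScreen" />;
  }
  return <Slot />;
}
```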

How does Astro Close Database Connections?

I am not sure how, or whether, Astro closes database connections after data is retrieved. Do the packages themselves take care of this, or do the connections "just" drop, leaving it up to the database server at the other end to time out and close them?

Examples: with Supabase as a backend service (https://docs.astro.build/en/guides/backend/supabase/) I do not see any closing of database connections. The same with Astro DB: https://docs.astro.build/en/guides/astro-db/.
Also, in a codebase I saw with Mongo (not official), https://github.com/skolhustick/astro-mongodb/blob/main/src/lib/mongodb.js, the user retrieval at https://github.com/skolhustick/astro-mongodb/blob/main/src/pages/users/index.astro "only" retrieves but never closes the connection.

I see code like this everywhere:

---
import { getAllUsers } from "../../lib/users";
import Layout from "../layouts/Layout.astro";
const users = await getAllUsers();
if (!users) {
  return new Response(null, {
    status: 404,
    statusText: "Not found",
  });
}
---

/* and in users.js */
import { Users } from "./mongodb";

export const getAllUsers = async () => {
  const users = await (await Users()).find({}).toArray();
  return users;
};

export const createUser = async (newUser) => {
  const user = await (await Users()).insert(newUser);
  return user;
};

So yes, my question is this: does Astro “never” close connections? Should it? Should developers add something, or do the underlying npm packages take care of it when the process ends?
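To make the pattern concrete, this is roughly what the cached-client approach in that Mongo example amounts to: one long-lived client reused across requests, with the driver pooling connections internally, and an explicit close() only on deliberate shutdown. The names here are my own sketch, not the repo's exact code:

```javascript
import { MongoClient } from "mongodb";

// One client per process, created lazily and reused across requests.
// The driver pools connections internally, so per-request close()
// calls are unnecessary (and would hurt performance).
let clientPromise;

export function getClient(uri) {
  if (!clientPromise) {
    clientPromise = new MongoClient(uri).connect();
  }
  return clientPromise;
}

// Optional: close the pool on deliberate shutdown; otherwise the
// server-side idle timeout cleans up when the process exits.
export async function shutdown() {
  if (clientPromise) {
    await (await clientPromise).close();
    clientPromise = undefined;
  }
}
```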

React JS deleting nested comments

I am studying nested-comments functionality in React. I have trouble deleting nested comments. Basically every comment has a replies property, which is an array of objects (exactly the same comment shape as the parent ones).

When I try to update the parent state in the inner loop (updateCommentsData(parentStateCopy);) with the copied and modified array of objects, it throws this error:
updateCommentsData is not a function, even though the modified array of objects looks correct when console-logging it.

Delete function:

function deleteComment(id) {
  for (let i = 0; i < commentsData.length; i++) {
    if (commentsData[i].id === id) {
      updateCommentsData((prev) => {
        return prev.filter((item) => item.id !== id);
      });
      break;
    } else {
      if (commentsData[i].replies.length !== 0) {
        for (let j = 0; j < commentsData[i].replies.length; j++) {
          repliesStateCopy.push(commentsData[i].replies[j]);
          if (commentsData[i].replies[j].id === id) {
            repliesStateCopy = repliesStateCopy.filter(
              (item) => item.id !== id
            );
            parentStateCopy[i].replies = repliesStateCopy;
            updateCommentsData(parentStateCopy);
            break;
          }
        }
      }
    }
  }
}

JSON Data looks like that:

"comments": [
{
  "id": 1,
  "content": "Impressive! Though it seems the drag feature could be improved. But overall it looks incredible. You've nailed the design and the responsiveness at various breakpoints works really well.",
  "createdAt": "1 month ago",
  "score": 12,
  "user": {
    "image": {
      "png": "./images/avatars/image-amyrobson.png",
      "webp": "./images/avatars/image-amyrobson.webp"
    },
    "username": "amyrobson"
  },
  "replies": []
},
{
  "id": 2,
  "content": "Woah, your project looks awesome! How long have you been coding for? I'm still new, but think I want to dive into React as well soon. Perhaps you can give me an insight on where I can learn React? Thanks!",
  "createdAt": "2 weeks ago",
  "score": 5,
  "user": {
    "image": {
      "png": "./images/avatars/image-maxblagun.png",
      "webp": "./images/avatars/image-maxblagun.webp"
    },
    "username": "maxblagun"
  },
  "replies": [
    {
      "id": 3,
      "content": "If you're still new, I'd recommend focusing on the fundamentals of HTML, CSS, and JS before considering React. It's very tempting to jump ahead but lay a solid foundation first.",
      "createdAt": "1 week ago",
      "score": 4,
      "replyingTo": "maxblagun",
      "user": {
        "image": {
          "png": "./images/avatars/image-ramsesmiron.png",
          "webp": "./images/avatars/image-ramsesmiron.webp"
        },
        "username": "ramsesmiron"
      },
      "replies": []
    },
    {
      "id": 4,
      "content": "I couldn't agree more with this. Everything moves so fast and it always seems like everyone knows the newest library/framework. But the fundamentals are what stay constant.",
      "createdAt": "2 days ago",
      "score": 2,
      "replyingTo": "ramsesmiron",
      "user": {
        "image": {
          "png": "./images/avatars/image-juliusomo.png",
          "webp": "./images/avatars/image-juliusomo.webp"
        },
        "username": "juliusomo"
      },
      "replies": []
    }
  ]
},
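For comparison, here is the recursive approach I am experimenting with: instead of hand-rolled loops and shared copies, build a new tree where the matching id is filtered out at every level, then hand that single result to the state setter. This is a sketch, not my current code:

```javascript
// Returns a new comments array with the comment whose id matches
// removed, at any nesting depth. Never mutates the input.
function removeComment(comments, id) {
  return comments
    .filter((comment) => comment.id !== id)
    .map((comment) => ({
      ...comment,
      replies: removeComment(comment.replies || [], id),
    }));
}

// Usage with React state:
// updateCommentsData((prev) => removeComment(prev, idToDelete));
```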

ApolloError: Server response was missing for query ‘CreateChatbot’

I have successfully run my front end in Next.js and also started the server with the stepzen start command.

here is my serverClient.ts file

import {
  ApolloClient,
  DefaultOptions,
  HttpLink,
  InMemoryCache,
} from "@apollo/client";

const defaultOptions: DefaultOptions = {
  watchQuery: {
    fetchPolicy: "no-cache",
    errorPolicy: "ignore",
  },
  query: {
    fetchPolicy: "no-cache",
    errorPolicy: "all",
  },
  mutate: {
    errorPolicy: "all",
    fetchPolicy: "no-cache",
  },
};

export const serverClient = new ApolloClient({
  ssrMode: true,
  link: new HttpLink({
    uri: process.env.NEXT_PUBLIC_GRAPHQL_ENDPOINT,
    headers: {
      Authorization: `Apikey ${process.env.GRAPHQL_TOKEN}`,
    },
    fetch: typeof fetch !== "undefined" ? fetch : undefined,
  }),
  cache: new InMemoryCache(),
  defaultOptions,
});

here is apolloClient.ts file:

import {
  ApolloClient,
  DefaultOptions,
  InMemoryCache,
  createHttpLink,
} from "@apollo/client";

export const BASE_URL =
  process.env.NODE_ENV !== "development"
    ? `https://${process.env.NEXT_PUBLIC_VERCEL_URL}`
    : "http://localhost:3000";

const httpLink = createHttpLink({
  uri: `${BASE_URL}/api/graphql`,
});

const defaultOptions: DefaultOptions = {
  watchQuery: {
    fetchPolicy: "no-cache",
    errorPolicy: "ignore",
  },
  query: {
    fetchPolicy: "no-cache",
    errorPolicy: "all",
  },
  mutate: {
    errorPolicy: "all",
    fetchPolicy: "no-cache",
  },
};

const client = new ApolloClient({
  link: httpLink,
  cache: new InMemoryCache(),
  defaultOptions: defaultOptions,
});

export default client;

here is my mutations:

import { gql } from "@apollo/client";

export const CREATE_CHATBOT = gql`
  mutation CreateChatbot(
    $clerk_user_id: String!
    $name: String!
    $created_at: timestamptz!
  ) {
    insertChatbots(
      clerk_user_id: $clerk_user_id
      name: $name
      created_at: $created_at
    ) {
      id
      name
      created_at
    }
  }
`;

and here is the handler, where I have a simple input field whose data I want to save into a Neon database; I have exposed this API using IBM API Connect Essentials:

  const { user } = useUser();
  const [name, setName] = React.useState("");
  const router = useRouter();

  const [createChatbot, { data, loading, error }] = useMutation(
    CREATE_CHATBOT,
    {
      variables: {
        clerk_user_id: user?.id,
        name: name,
        created_at: new Date().toISOString(),
      },
    }
  );

  const handleSubmit = async (e: FormEvent) => {
    e.preventDefault();
    try {
      console.log("Creating Chatbot...");
      const data = await createChatbot();
      setName("");
      router.push(`/edit-chatbot/${data.data.insertChatbots.id}`);
    } catch (error) {
      console.error(error);
    }
  };

It looks like I have done everything correctly, yet it does not save any data in the database and always shows me an error like this: ApolloError: Server response was missing for query 'CreateChatbot'
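Since my defaultOptions use errorPolicy: "all", server-side GraphQL errors come back on the mutation result instead of being thrown, so one thing I plan to do is log result.errors as well as result.data. A sketch of the adjusted handler (same names as my code above):

```javascript
const handleSubmit = async (e) => {
  e.preventDefault();
  const result = await createChatbot();
  // With errorPolicy: "all", GraphQL errors are surfaced on the result
  // rather than thrown, and data may be undefined.
  if (result.errors) {
    console.error("GraphQL errors:", result.errors);
    return;
  }
  if (!result.data) {
    console.error("Empty response body from /api/graphql");
    return;
  }
  setName("");
  router.push(`/edit-chatbot/${result.data.insertChatbots.id}`);
};
```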

Move GB files from one prefix to another in S3 using Lambda

I run a Lambda function weekly, triggered by an EventBridge rule, to move files from an S3 prefix (landing) to a different prefix in the same bucket (hist). The Lambda has a timeout of 5 minutes.

It works as expected for small files, but for objects of 2-3 GB the Lambda already takes 5 minutes to copy and delete each file. It doesn't show a timeout exception, but it seems to move and delete one file, end, and then run the same code again for the next file, which takes another 5 minutes. It does move the files, but the last step of sending a notification is never executed.

This behavior is quite unexpected. Do you know what the correct way of handling this would be? I am thinking of increasing the Lambda memory until I find the optimal value.

Here is the code and a screenshot of the logs for a better idea.

TIA

export async function handler() {

    console.log("Moving objects from ", sourcePrefix, " to ", destinationPrefix)

    var today = new Date();
    var daysSinceSunday = today.getDay(); // Sunday is 0, Monday is 1, ...

    // Sunday from 2 weeks ago
    var twoSundaysAgo = new Date(today)
    twoSundaysAgo.setDate(today.getDate() - daysSinceSunday - 7);
    twoSundaysAgo.setHours(23, 59, 59, 999)

    let messageText = `:bucket: *Historical Data Archiver ${twoSundaysAgo.toISOString().slice(0, 10)}*\n` +
        `*Bucket:* \`${bucketName}\`\n` +
        `*S3 Prefix Source:* \`${sourcePrefix}\`\n` +
        `*S3 Prefix Destination:* \`${destinationPrefix}\`\n`

    const webhookUrl = await getSecretValue(secretNameWebhook, keyNameWebhook)

    try {

        const listObjectsCommand = new ListObjectsCommand({
            Bucket: bucketName,
            Prefix: sourcePrefix,
        })

        const listedObjects = await s3.send(listObjectsCommand)

        if (!listedObjects.Contents) {
            console.log('0 objects eligible for archival')
            messageText += `:eight_spoked_asterisk: *Status*: 0 objects eligible for archival\n`
        } else {
            let numObjects = 0
            //Loop objects
            for (const object of listedObjects.Contents) {
                const objectDate = new Date(object.LastModified)
                //Remove objects from two Sundays ago and before
                if (objectDate <= twoSundaysAgo) {
                    const copyCommand = new CopyObjectCommand({
                        Bucket: bucketName,
                        CopySource: `${bucketName}/${object.Key}`,
                        Key: object.Key.replace(sourcePrefix, destinationPrefix),
                    })
                    //copy to hist path
                    await s3.send(copyCommand)

                    const deleteCommand = new DeleteObjectCommand({ Bucket: bucketName, Key: object.Key })
                    //delete from origin path
                    await s3.send(deleteCommand)

                    console.log('Successfully moved object:', object.Key)

                    numObjects += 1
                }
            }

            //0 objects were moved
            if(numObjects == 0){
                messageText += `:eight_spoked_asterisk: *Status*: 0 objects eligible for archival\n`
            }else{
                messageText += `:white_check_mark: *Status*: Successfully moved \`${numObjects}\` objects.\n`
            }
        }
        //Everything went smoothly, send message to slack
        const fallbackMessage = `New transition objects from landing to hist S3`
        await sendTextToSlackBasic(messageText, fallbackMessage, green, webhookUrl)
    } catch (error) {
        console.error(`Error moving objects: ${error}`)
        messageText += `:x: *Error*: \`${error.message}\`\n`
        const fallbackMessage = `Error hist S3: ${error.message}`
        await sendTextToSlackBasic(messageText, fallbackMessage, red, webhookUrl)
        throw error
    }
}

(screenshot of the Lambda's CloudWatch logs)
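To confirm whether the function is actually timing out mid-loop and being retried (asynchronous invocations are retried on failure, which would explain the re-runs with no logged exception), I am thinking of timing each copy against the Lambda deadline. A small helper sketch, using the standard Node.js handler context:

```javascript
// Wraps an awaited S3 call with timing against the Lambda deadline.
// context.getRemainingTimeInMillis() shows how close each copy brings
// the invocation to the configured timeout.
async function timed(label, context, work) {
    const before = context.getRemainingTimeInMillis();
    const result = await work();
    const after = context.getRemainingTimeInMillis();
    console.log(`${label}: took ${before - after} ms, ${after} ms remaining`);
    return result;
}

// Usage inside the loop, with the handler declared as
// export async function handler(event, context) { ... }:
// await timed(`copy ${object.Key}`, context, () => s3.send(copyCommand));
```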

Why don't JavaScript objects track the count of their keys?

There is already a size property on the JS Map class, but for plain objects there is no such property. Currently I am using the Object.keys function just to find out the length. Since Object.keys is O(n) (or even O(n log n)), what is preventing browsers from implementing something like, say, Object.keyCount, which returns the number of keys in an object via an internal counter? I am just curious why it is still not there.
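To illustrate the difference I mean — Map maintains its size internally, while a plain object requires materializing the whole key array just to read its length:

```javascript
const map = new Map([["a", 1], ["b", 2], ["c", 3]]);
console.log(map.size); // 3 — maintained internally, O(1)

const obj = { a: 1, b: 2, c: 3 };
// No built-in counter: Object.keys allocates an array of all the keys
// just so we can read its length.
console.log(Object.keys(obj).length); // 3 — O(n)
```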