My current workflow resizes/upscales images using ImageMagick inside an AWS Lambda function.
It uses an ImageMagick Lambda layer, and the code is written in Node.js as follows:
// Imports: AWS SDK v3 S3 client plus Node built-ins
const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3');
const { pipeline } = require('stream/promises');
const { execSync } = require('child_process');
const fs = require('fs');
const os = require('os');
const path = require('path');

const s3Client = new S3Client({});
// (bucket, key, desiredWidth, and desiredHeight come from the handler's event; not shown here)

// Step 1: Download the image from S3
console.log("Downloading image from S3 bucket:", bucket, "with key:", key);
const inputImagePath = path.join(os.tmpdir(), 'input.jpg');
const outputImagePath = path.join(os.tmpdir(), 'resized-image.jpg');

const downloadImageFromS3 = async () => {
  const getObjectCommand = new GetObjectCommand({ Bucket: bucket, Key: key });
  const s3Object = await s3Client.send(getObjectCommand);
  await pipeline(s3Object.Body, fs.createWriteStream(inputImagePath));
};
await downloadImageFromS3();
console.log("Image downloaded successfully!");

// Step 2: Resize the image using ImageMagick
console.log(`Resizing image to ${desiredWidth}x${desiredHeight}...`);
const command = `convert "${inputImagePath}" -adaptive-resize ${desiredWidth}x${desiredHeight}! -density 300x300 "${outputImagePath}"`;
execSync(command);
console.log("Image resized successfully!");
Since the images get upscaled to fairly large sizes (6000+ pixels in either the width or the height), the operation takes about 10 seconds to complete.
Things I have tried:
- Increasing the memory allocation of the Lambda function to the currently-max-allowable 3008 MB. (I have also requested that my account be given access to the absolute maximum 10,240 MB tier as well, to see if that will help speed it up.) "Max Memory Used" caps out at a surprisingly low level, around 300-500 MB. Is there any way I can force it to use more CPU resources to process the image faster? Some kind of ImageMagick and/or AWS Lambda setting that explicitly says "use as much CPU power to do this as fast as possible"?
- Compressing the images in that same convert command, to see if it would return the output faster. This had no apparent impact on speed.
- Using a resize method other than adaptive resize. "-resize" does execute about 2x faster, but the final image quality is poor, and the very best image quality is required for this workflow (I've found "-adaptive-resize" produces by far the best-looking outputs).
Other ideas I have:
- Since I need to resize each image to several slightly different aspect ratios, instead of doing it sequentially in a loop, could I write the code to execute all of these operations simultaneously? So instead of 3 in a row taking 30 seconds total, all 3 would execute and finish at roughly the same time, taking only about 10 seconds total.
- Is there a way to avoid the "download my image" step altogether? Any way to have the upscaling operation act directly on the image that's already in my S3 bucket? That is, have the Lambda function pipe the object straight into ImageMagick instead of the intermediate step where it gets written to disk first and then passed to ImageMagick.
Any ideas/tips would be HIGHLY appreciated. Thank you!