I’m working on performance testing with Grafana K6 and trying to structure my test scripts to avoid redundancy, so I’ve implemented a class-based approach. Here’s a code sample:
```javascript
// Module paths are assumed; adjust to your project layout
import { MetricsBuilder } from './metrics-builder.js';
import { K6Runner } from './k6-runner.js';

const metrics = new MetricsBuilder();
const k6Runner = new K6Runner();

export const setup = () => k6Runner.doSetup();
export const options = k6Runner.getOptions();
export default () => k6Runner.run();
```
The `MetricsBuilder` class is designed to hold various metrics (including custom ones) and make them reusable across multiple test scripts. I want to keep the codebase clean, since there will be many test scripts, and avoid duplicating this logic in each one.
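A simplified, illustrative version of the class (the metric names and methods here are placeholders, not the real implementation):

```javascript
// metrics-builder.js -- simplified sketch for illustration
import { Counter, Trend, Rate } from 'k6/metrics';

export class MetricsBuilder {
  constructor() {
    // Custom metrics have to be created in the init context;
    // k6 aggregates their samples by metric name across all VUs
    this.loginDuration = new Trend('login_duration', true); // time-based trend
    this.failedLogins = new Counter('failed_logins');
    this.checkoutSuccess = new Rate('checkout_success');
  }

  recordLogin(durationMs, ok) {
    this.loginDuration.add(durationMs);
    if (!ok) {
      this.failedLogins.add(1);
    }
  }

  recordCheckout(ok) {
    this.checkoutSuccess.add(ok);
  }
}
```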
However, I have a few concerns:
# Init Context and SharedArray Behavior
According to the documentation, the init context runs once per Virtual User (VU), while a `SharedArray` is only initialized once and shared across all VUs. I’ve noticed that the `MetricsBuilder` class gets initialized for every VU in the init context, but I don’t see any significant difference in the reports. Does K6 handle it appropriately?
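For reference, this is how I understand the difference (the file path and data shape are just placeholders):

```javascript
import { SharedArray } from 'k6/data';

// Built once for the whole test; every VU reads the same underlying copy
const sharedUsers = new SharedArray('users', function () {
  return JSON.parse(open('./users.json')); // placeholder data file
});

// Re-evaluated in every VU's init context; each VU holds its own copy
const perVuUsers = JSON.parse(open('./users.json'));

export default function () {
  // Both behave the same in the VU code; the difference is memory usage
  console.log(sharedUsers.length, perVuUsers.length);
}
```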
# Is it a bad practice to use a class like MetricsBuilder in this context?
Does initializing the same object multiple times in the init context (one per VU) have any negative impact on performance or results?
# Initialization of Classes in the Init Context
I’ve noticed that initializing a class like `K6Runner` in the init context (e.g., `const k6Runner = new K6Runner();`) results in the same number of instances being created as the number of VUs.
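For example, a stripped-down version of how this can be observed (the `id` field is only a marker added for illustration):

```javascript
import exec from 'k6/execution';

// Stripped-down stand-in for the real K6Runner
class K6Runner {
  constructor() {
    this.id = Math.floor(Math.random() * 1e6); // marker to tell instances apart
  }
}

const k6Runner = new K6Runner(); // init context: evaluated once per VU

export const options = { vus: 3, iterations: 3 };

export default function () {
  // Each VU logs its own marker, i.e. each VU holds its own K6Runner instance
  console.log(`VU ${exec.vu.idInTest}: runner instance ${k6Runner.id}`);
}
```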
# Is this approach considered a bad practice?
Would this lead to any issues, such as resource overuse or unexpected side effects?
# Constructor Approach with SharedArray
If we pass multiple datasets from different `SharedArray` instances to the `K6Runner` constructor, are there any known issues or limitations? I haven’t encountered any problems so far, but I want to ensure this is a safe practice in K6.
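Concretely, the pattern looks roughly like this (file paths, data shape, and the endpoint are placeholders):

```javascript
import { SharedArray } from 'k6/data';
import http from 'k6/http';

// Both arrays are built once and shared read-only across all VUs
const users = new SharedArray('users', function () {
  return JSON.parse(open('./users.json')); // placeholder data file
});
const products = new SharedArray('products', function () {
  return JSON.parse(open('./products.json')); // placeholder data file
});

// Simplified runner: it only stores references to the shared arrays
class K6Runner {
  constructor(users, products) {
    this.users = users;
    this.products = products;
  }

  getOptions() {
    return { vus: 10, duration: '1m' };
  }

  run() {
    const user = this.users[Math.floor(Math.random() * this.users.length)];
    const product = this.products[Math.floor(Math.random() * this.products.length)];
    http.get(`https://test.k6.io/?user=${user.id}&product=${product.id}`); // placeholder endpoint
  }
}

const k6Runner = new K6Runner(users, products);

export const options = k6Runner.getOptions();
export default () => k6Runner.run();
```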
# Best Practices for Class-Based K6 Scripts
Are there any best practices for using a class-based approach in K6 scripts for managing metrics and modularizing test execution logic?
I’d appreciate any insights or recommendations for structuring large-scale k6 test scripts in a way that avoids redundancy while maintaining performance and accuracy. Thanks in advance.