Alyvix is an open-source application performance monitoring (APM) tool for visual monitoring.
Build end-user bots that visually interact with any Windows application, such as an ERP client or your favorite browser. Run and measure business-critical workflows just as a human would, but continuously.
Measure end-user experiences: Alyvix records the click-to-appearance responsiveness of each transaction. Create reports on IT service quality to support technical and business decisions.
Visually define end-user workflows: Alyvix Editor lets you build test cases in a visual way, interaction after interaction.
Automate any GUI-based (even streamed) Windows application: Alyvix works by processing screen frames, without being hardwired to APIs.
Run visual test cases that interact with a machine just as a real user would: Alyvix uses the mouse and keyboard just like a person does.
Measure click-to-appearance transaction times: Alyvix measures how long each transaction takes to complete after the previous interaction (see the timing sketch below).
Record the availability and responsiveness of each transaction: Alyvix allows you to monitor the performance of end-user experiences.
Alyvix provides demonstrable and indisputable proof with annotated screenshots whenever a visual response times out.
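To make the click-to-appearance thresholds concrete, here is a minimal sketch in plain Python of how a measured transaction time can be classified. This is an illustration only, not the Alyvix API; the function and state names are assumptions.

```python
# Illustrative sketch, not the Alyvix API: classify a measured
# click-to-appearance time (in seconds) against the per-transaction
# warning, critical, and timeout thresholds described above.

def classify_transaction(elapsed_s, warning_s, critical_s, timeout_s):
    """Return a monitoring state for one visual transaction."""
    if elapsed_s >= timeout_s:
        # The expected GUI component never appeared in time.
        return "TIMEOUT"
    if elapsed_s >= critical_s:
        return "CRITICAL"
    if elapsed_s >= warning_s:
        return "WARNING"
    return "OK"

# Example: a screen that appears after 6.2 s, with thresholds of 5 s
# (warning), 10 s (critical), and 30 s (timeout), is in WARNING state.
print(classify_transaction(6.2, warning_s=5, critical_s=10, timeout_s=30))
```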
The first step to automating end-user workflows is to visually define their transactions, and Alyvix Editor gives you all the tools you need. Just point and click to select GUI components from an application view and create a task step in your Alyvix test case. For each component you can configure its type (image, rectangle, or text), its actions (mouse and keyboard), and its click-to-appearance time thresholds (warning, critical, and timeout).
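As a mental model of what a task step carries, consider the sketch below. The field names are hypothetical and do not reflect the actual .alyvix file schema; they simply pair the component types, actions, and thresholds described above.

```python
# Hypothetical data model of a task step; field names are illustrative
# assumptions, not the actual .alyvix file format.
login_step = {
    "name": "open_login_form",
    "components": [
        {
            "type": "text",               # image, rectangle, or text
            "value": "Username",          # what Alyvix should detect on screen
            "action": {"mouse": "click"}, # mouse/keyboard interaction to perform
        },
        {
            "type": "image",
            "value": "login_button.png",
            "action": {"mouse": "click"},
        },
    ],
    # Click-to-appearance time thresholds, in seconds.
    "thresholds": {"warning": 5, "critical": 10, "timeout": 30},
}
```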
Once you have created task steps, you can drag and drop them into the scripting panel to compose the desired end-user workflow, creating an Alyvix visual test case that consists of a sequence of visual transactions. Conditionals and loops help you implement more complex logic.
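Conceptually, the resulting test case behaves like an ordered sequence of transactions in which a conditional can branch on a step's outcome and a loop can repeat steps. The sketch below illustrates that flow; the step names and the run_step() helper are hypothetical, not Alyvix internals.

```python
# Conceptual sketch of a visual test case as a sequence of transactions;
# the step names and run_step() helper are hypothetical.
def run_step(name):
    # A real runner would drive the mouse and keyboard, then time the
    # appearance of the expected component. Here we just pretend it passed.
    print(f"running transaction: {name}")
    return "OK"

workflow = ["open_app", "open_login_form", "submit_credentials", "open_report"]

for step in workflow:            # loop over the scripted sequence
    state = run_step(step)
    if state == "TIMEOUT":       # conditional: abort the workflow on failure
        break
```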
The Alyvix Robot CLI tool runs visual test cases saved as .alyvix files: the command executes end-user bots that reproduce the recorded end-user workflows. The resulting transaction performance measures are both printed to the console and saved as human-readable output files, with annotated screenshots that provide demonstrable and indisputable proof in case of failure.
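If you drive Alyvix Robot from your own scripts, the invocation can be wrapped as below. The install path, file names, and the --filename/--object flags are examples based on the documented invocation style; verify them against your Alyvix version before relying on them.

```python
# Sketch of launching Alyvix Robot from Python on Windows; paths, names,
# and flags are example assumptions to adapt to your installation.
import subprocess

result = subprocess.run(
    [
        r"C:\Program Files\Alyvix\alyvix_robot.exe",            # example path
        "--filename", r"C:\alyvix\testcases\erp_login.alyvix",
        "--object", "erp_login",   # which test case in the file to run
    ],
    capture_output=True,
    text=True,
)

print(result.stdout)      # per-transaction measures printed by the robot
print(result.returncode)  # a non-zero exit code can signal a failed run
```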
The end goal is to visualize trends over time with dashboards showing your end-user workflow performance. To get there, schedule test cases to run regularly and continuously, integrate their output into your own monitoring system, and analyze latency and downtime to assess IT service quality. Contact us if you need support in building, integrating, or maintaining test cases.
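A minimal integration sketch follows: run the test case on a fixed interval and forward its measures to a monitoring endpoint. The endpoint URL, the output file name, and the payload format are all assumptions to adapt to your own monitoring system; on Windows, a scheduled task is a common alternative to the loop shown here.

```python
# Minimal scheduling-and-forwarding sketch; the endpoint, output file
# name, and payload format are assumptions, not part of Alyvix itself.
import json
import subprocess
import time
import urllib.request

MONITORING_URL = "https://monitoring.example.com/api/measures"  # hypothetical

def run_and_forward():
    # Run the visual test case (flags as in the previous sketch).
    subprocess.run(
        ["alyvix_robot.exe",
         "--filename", "erp_login.alyvix",
         "--object", "erp_login"],
        check=False,  # a failed run still produces output worth forwarding
    )
    # Assumption: the run wrote its transaction measures to a JSON file.
    with open("erp_login_output.json", encoding="utf-8") as f:
        measures = json.load(f)
    req = urllib.request.Request(
        MONITORING_URL,
        data=json.dumps(measures).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

while True:              # run the end-user workflow every five minutes
    run_and_forward()
    time.sleep(300)
```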