Pulse reports provide details of test calls for a complete view of IVR performance.
There are two major categories of reporting available: Dashboard and Custom.
Dashboard Reports provide management with summary views that you can drill into for more granular reporting. The data from these reports can be used to benchmark system performance, analyze and track problems, drive continuous improvement, and enforce service level agreements.
The Custom Reports feature allows you to create your own reports with optional filters; these reports can be scheduled so that the results are emailed to you periodically.
Watch the video below for an example walkthrough of analyzing the results of a Pulse Report.
Pulse Dashboards Introduction Video
Watch the video below for an introduction to Cyara Pulse Dashboards and how they can be used to monitor your organization's CX.
Pulse Dashboards
The Pulse Dashboard is an extension to the Platform Monitoring solution, allowing an easy-to-read, visual representation of relevant Test Case results from current Pulse Campaigns. This report is configured to show a series of Pulse test results aggregated into one or more service groups.
The interface is designed to run on large Contact Center screens. Each display of results is based on a Service Group (which is linked to one or more Test Cases in a Campaign).
The Pulse Dashboard is accessible by Users assigned to the Dashboard role. The workflow of the Pulse Dashboard is shown below:
Pulse Dashboard
The Pulse Dashboard has been designed to run on a large screen (1920 x 1080). As such, some of the font sizes and colors have been chosen with this in mind. Currently, the Pulse Dashboard comes in two themes: light and dark.
To access the Pulse Dashboard, click the Dashboard option in the main menu.
The light theme is accessed with a URL similar to:
https://www.cyaraportal.com/CyaraWebPortal/Dashboard/1
When the page first loads, it invokes the REST API to load sufficient results to satisfy the Initial Load Data parameter.
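As a rough illustration of that initial load (the endpoint, path, and parameter names here are hypothetical, not Cyara's actual REST API), the page's first request behaves roughly like this:

```python
import requests  # generic HTTP client for the sketch

# Hypothetical endpoint and parameter names, for illustration only.
BASE_URL = "https://www.cyaraportal.com/CyaraWebPortal/api"  # assumed, not documented
INITIAL_LOAD_DATA = 50  # number of recent results to fetch when the page first opens

def load_initial_results(service_group_id: int) -> list[dict]:
    """Fetch enough recent Test Case results to satisfy the Initial Load Data parameter."""
    response = requests.get(
        f"{BASE_URL}/results",
        params={"serviceGroupId": service_group_id, "count": INITIAL_LOAD_DATA},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```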
The number of results that make up a spot is determined by the Service Group configuration (either by the number of Test Cases or by time range).
One or more Service Groups can be displayed on the Dashboard.
The summary columns to the right of the Spots (last hour, today, yesterday) show the percentage of successful calls made (satisfactory results are included).
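For example, a summary column value could be computed as in the following sketch (the status names are assumptions used for illustration):

```python
def success_percentage(results: list[str]) -> float:
    """Percentage of calls counted as successful; 'satisfactory' results are included."""
    if not results:
        return 0.0
    passed = sum(1 for status in results if status in ("success", "satisfactory"))
    return 100.0 * passed / len(results)

# e.g. 8 successful, 1 satisfactory, 1 failed -> 90.0
print(success_percentage(["success"] * 8 + ["satisfactory"] + ["failure"]))
```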
Each Spot refers to the execution of a Test Case (or Test Cases) from a Pulse Campaign, based on a specific category:
- Green spot = successful
- Red spot = failure (error)
- Orange spot = satisfactory
When automatic retries are triggered after a Category failure, the spot appears with a retry icon. For example, a successful Test Case after a retry is shown as a green spot with the retry icon; similarly, a Test Case with satisfactory results after retries is shown as an orange spot with the retry icon.
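The mapping from a result to its spot can be pictured as in this minimal sketch (the status model and rendering are assumptions; the real Dashboard logic is internal to Cyara):

```python
# Assumed status-to-color mapping, taken from the legend above.
SPOT_COLORS = {
    "success": "green",
    "failure": "red",
    "satisfactory": "orange",
}

def render_spot(status: str, retried: bool) -> str:
    """Map a Test Case result to its spot color, adding the retry icon when retries occurred."""
    color = SPOT_COLORS[status]
    return f"{color} spot with retry icon" if retried else f"{color} spot"

print(render_spot("success", retried=True))       # green spot with retry icon
print(render_spot("satisfactory", retried=True))  # orange spot with retry icon
```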
When a spot is selected, the Test Case result and the timestamp of when the Campaign ran are displayed. Click the Test Case link to open a validation page in a separate tab in your Web browser.
If the currently logged-in user belongs to the Dashboard role (and does not have the Reporting role), then the Test Result Details page may mask certain replies. This page will mask the Reply of any Steps that are matched by the expression defined in the Mask Reply Expression setting.
For more information about Mask Reply Expression, see Configuring and Customizing the Pulse Dashboard.
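As a sketch of that behavior (the expression below is a hypothetical example; the actual Mask Reply Expression is configured per Account):

```python
import re

# Hypothetical Mask Reply Expression: mask any Reply containing 4+ consecutive digits.
MASK_REPLY_EXPRESSION = r"\d{4,}"

def mask_reply(reply: str) -> str:
    """Mask the entire Reply of a Step when it matches the Mask Reply Expression."""
    return "******" if re.search(MASK_REPLY_EXPRESSION, reply) else reply

print(mask_reply("Your account 123456 balance is $40"))  # -> "******"
print(mask_reply("Main menu prompt"))                    # unchanged
```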
If a spot is red, which indicates it has one or more failed test results, then a Platform Administrator can right-click over the spot and choose the Force Success option.
This forces the result to successful and reloads all current Dashboards for the Account. Any services that have been attached to this Test Case and show this result will now indicate a successful result, with the slot's spot turning green.

For each Service Group, generic categories are pre-defined to display various aspects of a Test Case result:
- Answering
  - Green spot = the call was answered correctly (i.e. Step 0 was successful).
- Correct Prompts
  - Green spot = the speech recognition results in all Test Case steps were above the Minor Confidence Threshold.
- Responsive
  - Green spot = the response times for prompts (post Step 0) within a Test Case step were less than the Minor Threshold value.
- Completed
  - Green spot = all the Test Case steps completed successfully without any errors.
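How these categories relate to a single result can be pictured as in the following sketch (the field names are assumptions for illustration, not Cyara's internal logic):

```python
# A sketch with assumed step fields: success, confidence, response_ms.
def evaluate_categories(steps: list[dict],
                        minor_confidence_threshold: float,
                        minor_response_threshold_ms: int) -> dict[str, bool]:
    """Return True (green spot) or False per generic category for one Test Case result."""
    return {
        # Answering: the call was answered correctly (Step 0 succeeded).
        "Answering": steps[0]["success"],
        # Correct Prompts: every step's speech recognition confidence beat the threshold.
        "Correct Prompts": all(s["confidence"] > minor_confidence_threshold for s in steps),
        # Responsive: every post-Step-0 prompt arrived within the Minor Threshold.
        "Responsive": all(s["response_ms"] < minor_response_threshold_ms for s in steps[1:]),
        # Completed: all steps finished without errors.
        "Completed": all(s["success"] for s in steps),
    }
```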
Additional Category Groups can be created to identify specific step(s) within a Test Case; for example, ID verification steps.
Test Result Details
Service Groups can be set up with one or more Test Cases.
A group with multiple Test Cases shows the aggregated result as a single dot, or as separate dots per Test Case (per Category). When results are aggregated into one dot, the highest severity among the multiple test results is always the one displayed.
If a Test Case is used in multiple running Pulse Campaigns, you may see duplicate entries per dot.
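The highest-severity rule can be sketched as follows (the severity ordering is assumed from the spot legend above):

```python
# Assumed severity ordering: failure > satisfactory > success.
SEVERITY = {"success": 0, "satisfactory": 1, "failure": 2}

def aggregate_spot(statuses: list[str]) -> str:
    """When multiple Test Case results share one dot, the worst result wins."""
    return max(statuses, key=SEVERITY.__getitem__)

print(aggregate_spot(["success", "satisfactory", "success"]))  # satisfactory
print(aggregate_spot(["success", "failure"]))                  # failure
```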
Note that Service Groups are linked to Test Case IDs, not to Campaigns.
Quality Metrics
The Dashboard displays incident recovery metrics for each Voice Service Group that reveal the performance of the underlying infrastructure.

- Uptime: This value is the percentage of uptime of the Voice Service Group over the last 30 days. A downtime period comprises 3 or more consecutive voice test results where the “Answering” category failed.
- MTTR: Mean Time To Restore (MTTR) measures how long it takes to resolve an incident and restore service over the last 30 days. This reveals the efficiency of the incident resolution process.
- MTTI: Mean Time To Identify (MTTI) measures how long it takes to identify a failure over the last 30 days. This reveals how quickly we are alerted to issues on the Platform.
- MTBF: Mean Time Between Failures (MTBF) measures the average time between recoverable failures for the last 30 days. This reveals the potential frequency of future failures.
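As a rough illustration of how these four metrics relate (a sketch with assumed inputs; Cyara computes them server-side from the 30-day result history):

```python
from datetime import timedelta

WINDOW = timedelta(days=30)  # rolling 30-day window

def quality_metrics(incidents: list[dict]) -> dict[str, float]:
    """Compute the four metrics from a list of downtime incidents.

    Each incident (a downtime period of 3+ consecutive failed "Answering" results)
    is assumed to carry 'started', 'detected', and 'restored' offsets as timedeltas.
    """
    if not incidents:
        return {"uptime_pct": 100.0, "mttr_h": 0.0, "mtti_h": 0.0,
                "mtbf_h": WINDOW / timedelta(hours=1)}
    down = sum((i["restored"] - i["started"] for i in incidents), timedelta())
    up = WINDOW - down
    hours = lambda td: td / timedelta(hours=1)
    return {
        # Uptime: share of the window not spent in downtime periods.
        "uptime_pct": 100.0 * (up / WINDOW),
        # MTTR: average time from failure start to service restoration.
        "mttr_h": hours(down / len(incidents)),
        # MTTI: average time from failure start to the failure being identified.
        "mtti_h": hours(sum((i["detected"] - i["started"] for i in incidents),
                            timedelta()) / len(incidents)),
        # MTBF: average uptime between recoverable failures.
        "mtbf_h": hours(up / len(incidents)),
    }
```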