A question we’ve been asked countless times: “Why don’t my reporting stats match up exactly across my Genesys Reporting Solutions?” With a range of Genesys solutions that could be utilized across the CIM Platform, it’s sometimes not so obvious where their similarities and differences might be. Here are a few pointers for those who are feeling the pain:
1. Different solutions count different Objects in different ways. It may be a bit confusing, because they might use the same “Object Name”, “Statistic Name” or “Time Period” – but the solutions themselves were built for different purposes. Some look at workflow or workforce requirements, some count volumes, others count interactions and some count legs of interactions. On top of those basic elements sit the “Count”, “Average”, “Maximum” and “Minimum” rules, which may not be the same either!
2. Bear in mind that Genesys is not a single “Solution” – it is a Platform. It can contain many different Solutions which were built with specific tasks in mind. Ask yourself if you are using those Solutions for the individual purposes for which they were intended – or if you are trying to manipulate data sets from different Solutions purely because you think you should. The volumes, counts, statistics, averages, calculations and algorithms within each Solution are applied to thousands – sometimes millions – of units of measurement, so there are bound to be some small-scale differences.
3. The majority of statistics are extracted from “Source” via a CTI Link (TServer) or an IServer for multimedia events – or a similar, direct link to that “Source”. So, you might think they should all be the same, right? They are not. Here are a few examples:
- A time slice (example: 09:15 – 09:30) in one reporting solution may include a count from the previous time slice, because an interaction began in that earlier time slice. In another reporting solution, the same interaction may be reported in the following time slice (example: 09:30 – 09:45), because that is when the interaction ended. This can cause differences between the counts in both solutions – but they are not discrepancies.
- “Average” in one solution may use pure mathematics and include decimals (example: 12.56384561 seconds) – another might round the figure (example: 12.6 seconds). Applying the supplied average to either solution and comparing them will show a difference – not a discrepancy.
- “Object” in one solution may contain a different group of elements, such as an Agent Group – or it might be a Group of Agents. They sound similar, right? They are not. An Agent Group is defined on the CIM Platform, but a Group of Agents can be user-defined within a specific report. This means that a pre-set group may be different to an ad hoc grouping.
- “Objects” can also be dynamic; an Agent Group may have contained 12 Agents at the start of the reporting period, but Agents can be added to or removed from an Agent Group on the fly through CIM and/or Genesys Administrator. This means you may have included or excluded stats from Agents who were added, removed or moved to another Agent Group during that time period – and the same is equally true of Skills/Skill Levels in reporting.
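To make the time-slice point above concrete, here is a minimal sketch – not Genesys code, and using entirely hypothetical interactions – showing how one solution attributing an interaction to the slice where it started, and another to the slice where it ended, produces different per-slice counts from identical data:

```python
from datetime import datetime

# Hypothetical interactions: (start, end) timestamps
interactions = [
    (datetime(2023, 1, 1, 9, 14), datetime(2023, 1, 1, 9, 33)),
    (datetime(2023, 1, 1, 9, 20), datetime(2023, 1, 1, 9, 25)),
]

def slice_label(ts, minutes=15):
    """Floor a timestamp to the start of its 15-minute reporting slice."""
    floored = ts.minute - ts.minute % minutes
    return ts.replace(minute=floored, second=0).strftime("%H:%M")

# "Solution A" attributes each interaction to the slice where it STARTED;
# "Solution B" attributes it to the slice where it ENDED.
by_start, by_end = {}, {}
for start, end in interactions:
    by_start[slice_label(start)] = by_start.get(slice_label(start), 0) + 1
    by_end[slice_label(end)] = by_end.get(slice_label(end), 0) + 1

print(by_start)  # the first interaction lands in the 09:00 slice
print(by_end)    # the very same interaction lands in the 09:30 slice
```

Neither count is wrong; each is internally consistent with its own attribution rule, which is why the difference is not a discrepancy.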
4. There isn’t a “matrix of comparable units” for the statistics across Solutions, because they generally serve different purposes:
- Call Center Analyzer (CCA), Call Center Pulse + (CCP+), Pulse and Interaction Workspace (IWS) generally count Call Volumes.
- Interactive Insights and Info Mart (GI2) generally count voice, eServices and other types of Interactions.
- WfM generally counts Required Work Effort using its own proprietary interface and algorithms, counting Interaction Legs rather than full Interactions.
5. As a rule of thumb and as an Industry Best Practice, you should allow for up to 3% differential between reporting solutions. You might be looking for everything to be mathematically equal and – whilst it may be very satisfying to see that – it will probably never be that accurate. Give yourself a break and allow for this differential as a standard, acceptable measure!
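That rule-of-thumb tolerance is easy to build into your own reconciliation checks. The sketch below – an illustration with made-up call counts, not part of any Genesys product – flags only differences that exceed the suggested 3%:

```python
def within_tolerance(a, b, tolerance=0.03):
    """Return True if two counts differ by no more than `tolerance`
    (3% by default), relative to the larger of the two."""
    if a == b:
        return True
    return abs(a - b) / max(abs(a), abs(b)) <= tolerance

# 4120 vs 4210 calls: roughly 2.1% apart -> an acceptable differential
print(within_tolerance(4120, 4210))  # True
# 4120 vs 4600 calls: roughly 10.4% apart -> worth investigating
print(within_tolerance(4120, 4600))  # False
```

A check like this keeps attention on the genuine outliers instead of chasing expected, small-scale differences between solutions.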
SOFTEL have had a lot of experience over the years providing guidance and rationale for what might appear to be discrepancies within Genesys reporting elements. If you’ve found that your stats don’t quite match up the way you expect – and you can’t work out why – why not give us a nudge?