Tag: Six Sigma

  • Pillar Analysis: What an AHT Defect Reveals About Organizational Structure and Human Behavior

My doctoral research is built on three pillars: leadership culture, organizational structure, and human behavior. The working thesis examines how these three variables independently and collectively contribute to the 80 percent of cybersecurity breaches attributed to human error. But before I can apply that framework to cybersecurity, I need to demonstrate that I understand how these pillars interact in any organizational context. This post does exactly that, using a technical support AHT scenario as the laboratory.

    THE SCENARIO IN BRIEF

A technical support team of 25 agents across two shifts was missing its 8-minute Average Handle Time (AHT) target. Over 45 days, 4,219 of 18,750 interactions exceeded the target. Measured by shift, the evening team was operating at 1.80 sigma while the morning team operated at 2.68 sigma. The evening shift handled 40 percent of total volume but produced 68 percent of all defects.
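The shift gap in those numbers is large enough to be statistically unambiguous. As a quick check, a two-proportion z-test on the shift defect rates can be sketched from the counts alone (a minimal stdlib sketch; the variable names are mine):

```python
from math import sqrt

# Defect counts and interaction volumes from the scenario
evening_defects, evening_n = 2870, 7500
morning_defects, morning_n = 1349, 11250

p_evening = evening_defects / evening_n   # ~0.383
p_morning = morning_defects / morning_n   # ~0.120

# Pooled proportion under the null hypothesis that both shifts
# share a single underlying defect rate
p_pool = (evening_defects + morning_defects) / (evening_n + morning_n)
se = sqrt(p_pool * (1 - p_pool) * (1 / evening_n + 1 / morning_n))

z = (p_evening - p_morning) / se
print(f"z = {z:.1f}")  # on the order of 40 -- far beyond any significance threshold
```

A z-statistic this extreme means the shift difference cannot plausibly be sampling noise, which is what makes "statistically predictable" more than a figure of speech here.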

    PILLAR ONE: ORGANIZATIONAL STRUCTURE

    The structural failure in this scenario existed before a single call was ever answered. The evening shift was staffed with 10 agents, three of whom had been hired within the last 60 days, against a morning shift of 15 agents averaging 2.3 years of tenure. That structural imbalance meant the evening shift was being asked to perform at the same standard as the morning shift without the organizational support, tenure depth, or staffing density to make that realistic.

    Organizational structure does not just mean org charts and reporting lines. It means how resources are distributed, how teams are built, and whether the design of the organization makes success possible or quietly makes failure inevitable. In this case the structure made failure statistically predictable before anyone looked at a single performance metric.

    This is directly relevant to cybersecurity. Organizations that distribute security responsibilities unevenly across teams, that staff security functions inadequately, or that design processes without accounting for human capacity constraints are building structural vulnerability into their security posture before a single threat ever arrives.

    PILLAR TWO: HUMAN BEHAVIOR

The three customer-reported reasons for long calls reveal the human behavior dimension of this scenario precisely. Complex issues requiring escalation suggest agents lacking the confidence or knowledge to resolve problems independently. Multiple holds suggest agents navigating uncertainty in real time rather than arriving at calls prepared. Customers repeating information suggests that agents are not documenting interactions thoroughly enough to maintain continuity.

    None of these are character flaws. They are behavioral patterns produced by insufficient preparation, inadequate tools, and unclear process expectations. Human behavior in organizational contexts is rarely random. It is shaped by the environment, the training, the incentives, and the support structures surrounding the individual.

    This is the core insight my research is built on. The same logic applies directly to cybersecurity. When employees click phishing links, share credentials, or bypass security protocols they are not simply making bad decisions. They are making predictable human decisions inside organizational environments that failed to adequately prepare, support, or incentivize secure behavior.

    THE CONNECTION TO THE 80 PERCENT

    The AHT scenario produced a measurable defect rate driven by structural and behavioral variables that had nothing to do with agent intent or capability in isolation. The agents were not failing. The system surrounding them was failing them.

Apply that same lens to cybersecurity. If 80 percent of breaches involve human error, the question is not why humans are making errors. The question is what organizational structures and behavioral conditions are producing those errors at scale. And more importantly, what leadership decisions created or allowed those conditions to exist in the first place.

    That is the question this research exists to answer.

    WHAT COMES NEXT

    Future pillar analysis posts will examine leadership culture as the third variable and begin building the literature foundation for how all three pillars interact as a system rather than operating independently. The AHT scenario demonstrated that structural and behavioral variables are inseparable in practice. The research will ultimately argue that cybersecurity resilience cannot be achieved by addressing any single pillar in isolation.

    The system is the problem. The system is also the solution.

    Robert A. Reinhardt
    Independent Researcher
    ORCID: 0009-0007-6568-9784

  • Applying DMAIC to Average Handle Time: A Practice Analysis

One of the most important things I have learned during my Six Sigma certification journey is that DMAIC is not just a framework you read about. It is something you have to apply before it actually makes sense. So I did exactly that: I took a real-world technical support scenario and walked it through every phase of DMAIC to see whether I actually understood what I was doing.

The scenario involved a technical support team of 25 agents across two shifts. Over 45 days the team handled 18,750 interactions, and 4,219 of them exceeded the company Average Handle Time target of 8 minutes per interaction. Leadership wanted answers. I used DMAIC to find them.

    DEFINE

The problem was clear: 4,219 of 18,750 interactions exceeded the 8-minute AHT target over a 45-day period. Missing AHT creates higher labor costs per interaction, longer queue times for customers, and potential impact on service levels and customer satisfaction. The Critical to Quality (CTQ) measure for this analysis was simple: any interaction exceeding 8 minutes counts as a defect.

    MEASURE

    This is where the data started telling the real story.

    Overall Performance:
    Total Interactions: 18,750
    Total Defects: 4,219
    Yield: 77.5%
    DPMO: 225,013
    Sigma Level: 2.26

    Morning Shift Performance:
    Total Interactions: 11,250
    Total Defects: 1,349
    Yield: 88.01%
    DPMO: 119,911
    Sigma Level: 2.68

    Evening Shift Performance:
    Total Interactions: 7,500
    Total Defects: 2,870
    Yield: 61.73%
    DPMO: 382,667
    Sigma Level: 1.80

The overall team was operating at 2.26 sigma. But the evening shift was operating at 1.80 sigma while handling 40 percent of total call volume and producing 68 percent of all defects. The problem was not organization-wide. It was shift-specific.
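The figures above can be reproduced directly from the raw counts. A minimal sketch using Python's standard library (the helper name is mine; sigma level uses the conventional 1.5-sigma shift):

```python
from statistics import NormalDist

def sigma_metrics(defects: int, opportunities: int) -> dict:
    """Yield, DPMO, and sigma level (with the conventional 1.5-sigma shift)."""
    yield_frac = 1 - defects / opportunities
    dpmo = defects / opportunities * 1_000_000
    # z-score corresponding to the observed yield, plus the 1.5 shift
    sigma = NormalDist().inv_cdf(yield_frac) + 1.5
    return {"yield_pct": round(yield_frac * 100, 2),
            "dpmo": round(dpmo),
            "sigma": round(sigma, 2)}

print(sigma_metrics(4219, 18750))   # overall: 77.5% yield, 225,013 DPMO
print(sigma_metrics(1349, 11250))   # morning: 88.01% yield, 119,911 DPMO
print(sigma_metrics(2870, 7500))    # evening: 61.73% yield, 382,667 DPMO
```

The same three-line call pattern is what makes the shift-level breakdown cheap to compute, which matters later when the overall number turns out to hide the real story.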

    ANALYZE

The data pointed clearly at the evening shift as the primary driver of AHT failure. Evening shift agents averaged only 0.8 years of tenure, compared to 2.3 years on the morning shift. Three evening agents had been hired within the last 60 days. Customer-reported reasons for long calls included complex issues requiring escalation, agents placing customers on hold multiple times, and customers having to repeat information already provided. These three reasons pointed to gaps in troubleshooting depth, inefficient knowledge navigation, and documentation breakdowns, respectively.

    IMPROVE

Five targeted solutions were implemented. First, accelerated coaching for newer evening agents focused on call control and troubleshooting structure. Second, a standardized decision tree for the most common complex issue types. Third, improved access to internal knowledge resources to reduce unnecessary hold usage. Fourth, improved case documentation standards to eliminate customers repeating information. Fifth, temporary shift-specific performance check-ins until results stabilized.

    CONTROL

Weekly AHT tracking by shift, tenure group, and individual agent was implemented. A control dashboard was created covering total interactions, defect rate, and escalation frequency, with separate visibility into morning and evening performance. New hires were placed into a defined ramp plan with milestone reviews at 30, 60, and 90 days. Standard troubleshooting workflows were formalized and reinforced through team meetings. If defect rates rise again, leadership intervenes immediately with targeted retraining.
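One common way to formalize the "intervene when defect rates rise" trigger is a p-chart: each week's defect proportion is compared against 3-sigma control limits derived from a baseline rate. A sketch under stated assumptions (the baseline rate and weekly counts below are illustrative, not from the scenario):

```python
from math import sqrt

def p_chart_limits(p_bar: float, n: int) -> tuple[float, float]:
    """3-sigma control limits for a defect proportion observed on n interactions."""
    margin = 3 * sqrt(p_bar * (1 - p_bar) / n)
    return max(0.0, p_bar - margin), min(1.0, p_bar + margin)

# Illustrative values: a hypothetical post-improvement baseline defect
# rate of 12%, checked against one hypothetical week of evening volume.
baseline_rate = 0.12
weekly_defects, weekly_n = 190, 1200

lcl, ucl = p_chart_limits(baseline_rate, weekly_n)
weekly_rate = weekly_defects / weekly_n
if weekly_rate > ucl:
    print(f"Out of control: {weekly_rate:.3f} > UCL {ucl:.3f} -- trigger retraining")
```

The appeal of the p-chart here is that it distinguishes ordinary week-to-week noise from a genuine shift in the process, so leadership intervenes on signal rather than on every fluctuation.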

    WHAT I LEARNED

    The most important thing this exercise taught me is that DMAIC does not just find problems. It finds where problems actually live versus where they appear to live. Without measuring shift level sigma separately the evening shift defect concentration would have been buried inside the overall team number. The data forced precision that instinct alone could not have produced.

I also learned that DPMO is the honest unit of measurement because it removes volume bias entirely. In raw counts the evening shift produced about twice as many defects as the morning shift, but per million opportunities it produced more than three times as many (382,667 versus 119,911 DPMO). Raw numbers would have understated that gap; DPMO corrected it.

    This is what Six Sigma actually does. It makes the invisible visible.

    Robert A. Reinhardt
    Independent Researcher
    ORCID: 0009-0007-6568-9784