Author: Robert Reinhardt

  • Project Glasswing and the Three Pillars: A Real World Case Study in Organizational Cybersecurity Leadership

    Today Anthropic announced Project Glasswing, a coalition initiative pairing an unreleased AI model called Claude Mythos Preview with twelve of the world’s largest technology companies, including Apple, Amazon, Microsoft, Google, Cisco, CrowdStrike, and Palo Alto Networks. The stated goal is to give cybersecurity defenders a head start in identifying and patching critical vulnerabilities before AI capabilities of this magnitude become broadly available to bad actors.

    I want to document this announcement not just as a news item but as a living case study that maps directly onto the three research pillars driving my doctoral work. Because what Anthropic did today is exactly what my research argues organizations need to do, and almost never do, when facing a cybersecurity threat of extraordinary consequence.

    PILLAR ONE: LEADERSHIP CULTURE

    The most striking element of today’s announcement is what did not happen. Anthropic built a model so capable that it identified thousands of previously unknown zero-day vulnerabilities across every major operating system and browser in a matter of weeks. And then they chose not to release it.

    That decision did not happen automatically. Someone inside Anthropic looked at what this model could do and said the risk to human safety outweighs the commercial benefit of releasing it. That is a leadership culture decision. And it is a rare one.

    What I want to know, and what I intend to research further, is where exactly that decision was made inside the organization. Was it a single executive? A committee? A cross-functional ethics review? Did it come from the top down or was it raised from within the research team and escalated upward? The answer to that question tells us something important about what kind of leadership culture produces decisions like this one. Because the decision itself is not the interesting part. The organizational conditions that made the decision possible are.

    My research argues that leadership culture is one of three variables that independently and collectively determine an organization’s cybersecurity resilience. Today Anthropic demonstrated what a security-affirming leadership culture looks like at the highest possible stakes. It looks like choosing restraint over revenue when the consequences of getting it wrong are catastrophic.

    PILLAR TWO: ORGANIZATIONAL STRUCTURE

    The coalition structure Anthropic built for Project Glasswing is unusual and worth examining carefully. Technology companies typically guard their capabilities as competitive advantages. Proprietary data, internal tools, and frontier research are held close precisely because sharing them reduces the advantage they represent.

    Anthropic made a different structural decision. Rather than acting alone, they built a distributed coalition where each partner contributes complementary capabilities. AWS brings cloud infrastructure knowledge. CrowdStrike brings endpoint visibility across a trillion daily events. Apple brings device ecosystem expertise. The Linux Foundation brings open source software maintenance access. The structure is designed so that the collective defensive capability exceeds what any single organization could produce independently.

    This is an organizational structure argument. The decision about how to organize the response to a threat determines the effectiveness of that response as much as the tools being used. Twelve organizations with aligned defensive goals and complementary structural capabilities can accelerate progress at a pace that no single organization can match, regardless of how well resourced it is.

    One observation worth raising: Android is notably absent from this coalition. Google participates through its cloud infrastructure organization, but given that Apple’s device ecosystem knowledge is explicitly cited as a contributor to the partnership’s value, the absence of Google’s Android team as a distinct contributor leaves a gap in mobile ecosystem coverage that seems worth addressing. The same tool demonstrating capabilities across Apple’s platforms should be applied with equal depth across Android’s. The coalition structure is strong. It could be stronger.

    PILLAR THREE: HUMAN BEHAVIOR

    The entire premise of Project Glasswing rests on a human behavior argument. Anthropic’s own announcement states it directly. The same capabilities that make AI models dangerous in the wrong hands make them invaluable for finding and fixing flaws in important software. The technology is neutral. The human behavior surrounding it determines whether it protects or destroys.

    What Anthropic is racing against is not a technical problem. It is a human behavior problem at civilizational scale. The question is not whether AI models capable of finding and exploiting critical vulnerabilities will exist. They already do. The question is whether the humans and organizations with access to these capabilities will use them defensively before the humans and organizations with malicious intent develop equivalent capabilities independently.

    That is a human behavior and organizational culture race. And it is exactly the dynamic my research is built around at the organizational level. Inside any company the same question applies on a smaller but no less consequential scale. Will the humans inside this organization behave in ways that strengthen its security posture or will their decisions, habits, and gaps in awareness create the vulnerability that an attacker eventually exploits?

    A PERSONAL RESEARCH NOTE

    As a cybersecurity researcher, I find myself wanting access to a tool like Claude Mythos Preview. Not for offensive purposes but for exactly the kind of research this journal documents. Understanding how AI-augmented threat detection interacts with human decision making inside organizations is directly relevant to my doctoral work.

    This raises an interesting policy question that I believe deserves serious academic attention. How do you design an access verification system for frontier AI cybersecurity tools that validates legitimate research intent while creating accountability for misuse? The model Anthropic has used for Project Glasswing, limiting access to verified organizational partners with defined defensive use cases, is one approach. But it excludes independent researchers and smaller organizations that may have equally legitimate defensive needs.

    A credentialed access framework tied to verified researcher identity, something analogous to what ORCID provides for academic publishing, could be a meaningful contribution to this problem. Access granted to verified individuals. Usage logged and attributed. Behavioral guardrails built into the model itself that prevent harmful outputs even when accessed by legitimate users. The cybersecurity field already has a name for this kind of arrangement: a sandbox, a controlled environment where potentially dangerous tools can be used under conditions that limit their capacity to cause harm beyond the testing environment.
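
    To make that framework concrete, here is a minimal sketch of the verification-and-logging layer in Python. Everything in it is hypothetical: the registry, the function name, and the placeholder identifier are my own illustration of the concept, not any existing API, and the model-level behavioral guardrails would sit in a separate layer not shown here.

    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)

    # Hypothetical registry mapping verified researcher identities
    # (e.g., ORCID iDs) to their approved defensive use case
    VERIFIED_RESEARCHERS = {
        "0000-0000-0000-0001": "defensive vulnerability research",  # placeholder iD
    }

    def request_access(orcid_id: str, stated_use: str) -> bool:
        """Grant access only to verified identities; log and attribute every request."""
        approved = VERIFIED_RESEARCHERS.get(orcid_id) == stated_use
        # Usage is logged and attributed whether granted or denied,
        # creating the accountability trail for misuse investigations
        logging.info("access=%s orcid=%s use=%r at=%s",
                     "granted" if approved else "denied",
                     orcid_id, stated_use,
                     datetime.now(timezone.utc).isoformat())
        return approved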

    This is not a solved problem. It is an open research question. And it is one I intend to return to as my doctoral work develops.

    CONCLUSION

    Project Glasswing is not just a technology announcement. It is a demonstration of what organizational cybersecurity leadership looks like when the stakes are existential. Leadership culture that prioritizes human safety over commercial gain. Organizational structure that distributes defensive capability across complementary partners rather than hoarding it. And a human behavior framework that acknowledges the most dangerous variable in the entire cybersecurity equation is not the technology. It is the humans who decide how to use it.

    My research exists to understand how those three variables interact inside organizations at every scale. Today Anthropic gave the field a case study worth studying carefully.

    I will be watching how this develops.

    Robert A. Reinhardt
    Independent Researcher
    ORCID: 0009-0007-6568-9784
    www.businessresearchjournal.com

    References

    Anthropic. (2026, April 7). Project Glasswing: Securing critical software for the AI era. Anthropic. https://www.anthropic.com/glasswing

    Anthropic Frontier Red Team. (2026, April 7). Claude Mythos Preview: Technical details on cybersecurity capabilities. Anthropic Red Team Blog. https://red.anthropic.com/2026/mythos-preview

  • Pillar Analysis: What an AHT Defect Reveals About Organizational Structure and Human Behavior

    My doctoral research is built on three pillars: leadership culture, organizational structure, and human behavior. The working thesis examines how these three variables independently and collectively contribute to the 80 percent of cybersecurity breaches attributed to human error. But before I can apply that framework to cybersecurity I need to demonstrate that I understand how these pillars interact in any organizational context. This post does exactly that, using a technical support Average Handle Time (AHT) scenario as the laboratory.

    THE SCENARIO IN BRIEF

    A technical support team of 25 agents across two shifts was missing its 8-minute AHT target. Over 45 days, 4,219 out of 18,750 interactions exceeded the target. Measured by shift, the evening team operated at 1.80 sigma while the morning team operated at 2.68 sigma. The evening shift handled 40 percent of total volume but produced 68 percent of all defects.

    PILLAR ONE: ORGANIZATIONAL STRUCTURE

    The structural failure in this scenario existed before a single call was ever answered. The evening shift was staffed with 10 agents, three of whom had been hired within the last 60 days, against a morning shift of 15 agents averaging 2.3 years of tenure. That structural imbalance meant the evening shift was being asked to perform at the same standard as the morning shift without the organizational support, tenure depth, or staffing density to make that realistic.

    Organizational structure does not just mean org charts and reporting lines. It means how resources are distributed, how teams are built, and whether the design of the organization makes success possible or quietly makes failure inevitable. In this case the structure made failure statistically predictable before anyone looked at a single performance metric.

    This is directly relevant to cybersecurity. Organizations that distribute security responsibilities unevenly across teams, that staff security functions inadequately, or that design processes without accounting for human capacity constraints are building structural vulnerability into their security posture before a single threat ever arrives.

    PILLAR TWO: HUMAN BEHAVIOR

    The three customer-reported reasons for long calls reveal the human behavior dimension of this scenario precisely. Complex issues requiring escalation suggest agents lacking the confidence or knowledge to resolve problems independently. Multiple holds suggest agents navigating uncertainty in real time rather than arriving at calls prepared. Customers repeating information suggests agents not documenting interactions thoroughly enough to maintain continuity.

    None of these are character flaws. They are behavioral patterns produced by insufficient preparation, inadequate tools, and unclear process expectations. Human behavior in organizational contexts is rarely random. It is shaped by the environment, the training, the incentives, and the support structures surrounding the individual.

    This is the core insight my research is built on. The same logic applies directly to cybersecurity. When employees click phishing links, share credentials, or bypass security protocols, they are not simply making bad decisions. They are making predictable human decisions inside organizational environments that failed to adequately prepare, support, or incentivize secure behavior.

    THE CONNECTION TO THE 80 PERCENT

    The AHT scenario produced a measurable defect rate driven by structural and behavioral variables that had nothing to do with agent intent or capability in isolation. The agents were not failing. The system surrounding them was failing them.

    Apply that same lens to cybersecurity. If 80 percent of breaches involve human error, the question is not why humans are making errors. The question is what organizational structures and behavioral conditions are producing those errors at scale. And more importantly, what leadership decisions created or allowed those conditions to exist in the first place.

    That is the question this research exists to answer.

    WHAT COMES NEXT

    Future pillar analysis posts will examine leadership culture as the third variable and begin building the literature foundation for how all three pillars interact as a system rather than operating independently. The AHT scenario demonstrated that structural and behavioral variables are inseparable in practice. The research will ultimately argue that cybersecurity resilience cannot be achieved by addressing any single pillar in isolation.

    The system is the problem. The system is also the solution.

    Robert A. Reinhardt
    Independent Researcher
    ORCID: 0009-0007-6568-9784

  • Applying DMAIC to Average Handle Time: A Practice Analysis

    One of the most important things I have learned during my Six Sigma certification journey is that DMAIC (Define, Measure, Analyze, Improve, Control) is not just a framework you read about. It is something you have to apply before it actually makes sense. So I did exactly that. I took a real-world technical support scenario and walked it through every phase of DMAIC to see if I actually understood what I was doing.

    The scenario involved a technical support team of 25 agents across two shifts. Over 45 days the team handled 18,750 interactions, and 4,219 of them exceeded the company Average Handle Time (AHT) target of 8 minutes per interaction. Leadership wanted answers. I used DMAIC to find them.

    DEFINE

    The problem was clear: 4,219 out of 18,750 interactions exceeded the 8-minute AHT target over a 45-day period. Missing AHT creates higher labor costs per interaction, longer queue times for customers, and potential impact on service levels and customer satisfaction. The Critical to Quality (CTQ) measure for this analysis was simple: any interaction exceeding 8 minutes counts as a defect.

    MEASURE

    This is where the data started telling the real story.

    Overall Performance:
    Total Interactions: 18,750
    Total Defects: 4,219
    Yield: 77.5%
    DPMO (Defects Per Million Opportunities): 225,013
    Sigma Level: 2.26

    Morning Shift Performance:
    Total Interactions: 11,250
    Total Defects: 1,349
    Yield: 88.01%
    DPMO: 119,911
    Sigma Level: 2.68

    Evening Shift Performance:
    Total Interactions: 7,500
    Total Defects: 2,870
    Yield: 61.73%
    DPMO: 382,667
    Sigma Level: 1.80

    The overall team was operating at 2.26 sigma. But the evening shift was operating at 1.80 sigma while handling 40 percent of total call volume and producing 68 percent of all defects. The problem was not organization-wide. It was shift-specific.
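
    As a check on my own math, the three metric blocks above can be reproduced in a few lines of Python. This is a minimal sketch of the calculation, using only the standard library and assuming the conventional Six Sigma practice of adding a 1.5-sigma shift to convert long-term yield into a short-term sigma level:

    from statistics import NormalDist

    def sigma_metrics(interactions, defects):
        """Yield, DPMO (defects per million opportunities), and sigma level."""
        defect_rate = defects / interactions
        process_yield = 1 - defect_rate
        dpmo = defect_rate * 1_000_000
        # Inverse normal of the yield plus the conventional 1.5-sigma shift
        sigma = NormalDist().inv_cdf(process_yield) + 1.5
        return process_yield, dpmo, sigma

    for label, n, d in [("Overall", 18750, 4219),
                        ("Morning", 11250, 1349),
                        ("Evening", 7500, 2870)]:
        y, dpmo, s = sigma_metrics(n, d)
        print(f"{label}: yield {y:.2%}, DPMO {dpmo:,.0f}, sigma {s:.2f}")

    Running this reproduces the figures above: 77.50% / 225,013 / 2.26 overall, 88.01% / 119,911 / 2.68 for the morning shift, and 61.73% / 382,667 / 1.80 for the evening shift.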

    ANALYZE

    The data pointed clearly at the evening shift as the primary driver of AHT failure. Evening shift agents averaged only 0.8 years of tenure compared to 2.3 years on the morning shift. Three evening agents had been hired within the last 60 days. Customer-reported reasons for long calls included complex issues requiring escalation, agents placing customers on hold multiple times, and customers having to repeat information already provided. These three reasons pointed to gaps in troubleshooting depth, inefficient knowledge navigation, and documentation breakdowns, respectively.

    IMPROVE

    Five targeted solutions were implemented. First, accelerated coaching for newer evening agents focused on call control and troubleshooting structure. Second, a standardized decision tree for the most common complex issue types. Third, improved access to internal knowledge resources to reduce unnecessary hold usage. Fourth, improved case documentation standards to eliminate customers repeating information. Fifth, temporary shift-specific performance check-ins until results stabilized.

    CONTROL

    Weekly AHT tracking by shift, tenure group, and individual agent was implemented. A control dashboard was created covering total interactions, defect rate, and escalation frequency with separate visibility into morning and evening performance. New hires were placed into a defined ramp plan with milestone reviews at 30, 60, and 90 days. Standard troubleshooting workflows were formalized and reinforced through team meetings. If defect rates rise again, leadership intervenes immediately with targeted retraining.
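
    One way to make that intervention trigger statistically precise is a p-chart on the weekly defect rate. The sketch below is my own illustration, not part of the original control plan, and the weekly volume figure is a hypothetical derived from the 45-day totals:

    from math import sqrt

    # Baseline defect proportion from the 45-day measurement window
    p_bar = 4219 / 18750

    def p_chart_limits(n_weekly):
        """3-sigma control limits for a weekly sample of n_weekly interactions."""
        margin = 3 * sqrt(p_bar * (1 - p_bar) / n_weekly)
        return max(0.0, p_bar - margin), p_bar + margin

    # Hypothetical weekly volume: 18,750 interactions over roughly 6.4 weeks is about 2,900
    lcl, ucl = p_chart_limits(2900)
    print(f"center {p_bar:.3f}, LCL {lcl:.3f}, UCL {ucl:.3f}")

    A weekly defect rate above the upper control limit is the statistical signal that the process has shifted and leadership should intervene; points inside the limits are ordinary variation that does not by itself warrant retraining.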

    WHAT I LEARNED

    The most important thing this exercise taught me is that DMAIC does not just find problems. It finds where problems actually live versus where they appear to live. Without measuring shift level sigma separately the evening shift defect concentration would have been buried inside the overall team number. The data forced precision that instinct alone could not have produced.

    I also learned that DPMO is the honest unit of measurement because it removes volume bias entirely. The evening shift handled fewer calls but produced a defect rate more than three times that of the morning shift. Raw defect counts would have hidden that; DPMO corrected it.

    This is what Six Sigma actually does. It makes the invisible visible.

    Robert A. Reinhardt
    Independent Researcher
    ORCID: 0009-0007-6568-9784

  • Thesis Draft 001: The 80 Percent Problem

    There is a statistic that the cybersecurity industry acknowledges freely but has not adequately answered. Over 80 percent of cybersecurity breaches involve human error in some form. Not sophisticated zero-day exploits. Not hardware failures. People. And if that number is accurate, and the data consistently suggests it is, then why does the overwhelming majority of cybersecurity research remain focused on technical solutions to what is fundamentally a human problem?

    That question is where this research begins.

    The working thesis I am developing proposes that three variables (leadership culture, organizational structure, and human behavior) independently and collectively contribute to that 80 percent figure in ways that have not been tested together as a system. Most existing research isolates one variable at a time. Leadership studies rarely connect directly to measurable breach outcomes. Human error studies rarely ask why the organization allowed that error culture to develop in the first place. Organizational structure research rarely examines how hierarchy and communication design shape the daily security decisions of frontline employees.

    So what happens when you test all three simultaneously against real breach data? That is the question this body of research intends to answer. The current draft of the guiding research question reads as follows:

    To what extent do leadership culture, organizational structure, and human behavior independently and collectively contribute to the 80 percent of cybersecurity breaches attributed to human error, and what organizational interventions most effectively reduce that vulnerability?

    This is Draft 001. It will evolve. But the core argument is already clear. If organizations continue treating cybersecurity as a technology problem while ignoring the leadership and behavioral systems surrounding that technology, the 80 percent figure will not move. That figure is not a technical failure; I strongly believe the data will show it is an organizational one.

    The research that follows this post will begin building the literature foundation for each of the three pillars. What does existing scholarship say about leadership culture and security outcomes? Where does organizational structure create invisible vulnerabilities? And what does behavioral science tell us about why people make poor security decisions even when they know better?

    Those are the questions this journal exists to pursue.

    Robert A. Reinhardt
    Independent Researcher
    ORCID: 0009-0007-6568-9784


  • Understanding the Inclusion-Exclusion Principle (Set Union Formula for Three-Set Functions)

    Formula: | A ∪ B ∪ C | = | A | + | B | + | C | – | A ∩ B | – | A ∩ C | – | B ∩ C | + | A ∩ B ∩ C |
    A represents Data Set A
    B represents Data Set B
    C represents Data Set C
    | A ∪ B ∪ C | represents the total number of members in at least one of the three data sets
    A ∩ B represents the members shared by Data Sets A and B
    A ∩ C represents the members shared by Data Sets A and C
    B ∩ C represents the members shared by Data Sets B and C
    A ∩ B ∩ C represents the members shared by all three data sets, the center region of the Venn diagram
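
    A quick way to verify the formula is to run it against three small example sets. The sets below are my own arbitrary illustration, using Python's built-in set type:

    # Three arbitrary illustration sets
    A = {1, 2, 3, 4, 5}
    B = {4, 5, 6, 7}
    C = {5, 7, 8, 9}

    # Right-hand side: add the single sets, subtract the pairwise
    # overlaps, then add back the triple overlap
    rhs = (len(A) + len(B) + len(C)
           - len(A & B) - len(A & C) - len(B & C)
           + len(A & B & C))

    # Left-hand side: size of the union computed directly
    lhs = len(A | B | C)

    print(lhs, rhs)  # both are 9, so the two sides agree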

  • Understanding the Inclusion-Exclusion Principle (Set Union Formula for Two-Set Functions)

    Fascinated by my rough attempt at the practice GMAT (score: 11/15), I found myself diving deeper into the Set Union Formula (two-set function) and its relevance to the Venn diagram. What’s important to note is that this formula helps when you need to compare two data sets drawn from groups that overlap with each other. But I discovered that you’ll need to know three important things:

    • How many are in each group individually
    • How many are in both groups
    • How many are in either group or both (the value you are solving for)

    Formula: | A ∪ B | = | A | + | B | – | A ∩ B |
    A represents Data Set A
    B represents Data Set B
    | A ∪ B | represents the total number of members in either or both data sets
    A ∩ B represents the overlapping members, those counted in both data sets
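
    A quick worked example with numbers of my own (not from the GMAT problem): if 30 people are in Data Set A, 25 are in Data Set B, and 10 are in both, then | A ∪ B | = 30 + 25 – 10 = 45 people are in either group or both. The subtraction matters because the 10 people who belong to both sets would otherwise be counted twice.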

    Learning Source: https://www.youtube.com/watch?v=YlKDp03Kg68