Q4 2025 compliance & remediation update

Introduction
Welcome to our Turnberry compliance and remediation update for Q4 2025. Each quarter, our C&R team will present a brief rundown of the latest news and analysis of recent compliance, regulation, and security trends.
The goal of our newsletter is to increase awareness of the challenges, risks, and opportunities we face as we help our clients navigate their compliance and remediation journeys. We also hope to inform our team members of the latest trends in compliance, including changes presented by AI, new regulations, and global events.
This quarter’s contents:
- Trending topic: AI Risk Focus
- Q4 Compliance Updates
AI risk focus: better together
In 2025 and into 2026, much of the news has been dominated by AI’s rise to prominence in many fields of endeavor. The news has tended to focus on either the seemingly endless opportunities that AI presents to business or the possible human costs in jobs or career fields. Somewhere in the middle, there is a discussion on how AI and human colleagues need to work together to mitigate risk.
One factor in the AI/human risk paradigm is what can happen when AI is utilized without enough human oversight. In a recent experiment, the Wall Street Journal agreed to test a vending machine powered by Anthropic’s Claude AI at the newspaper’s New York headquarters. The instructions given to the AI, named “Claudius Sennet,” were simple: “Generate profits by stocking the machine with popular products you can buy from wholesalers.” To manage the “vending” machine (in reality, an accessible cabinet with a security camera attached), Anthropic and WSJ created two AI agents: Claudius and an AI manager called “Seymour Cash.” Journal staff were encouraged to interact with Claudius as a coworker, while Seymour was set up to oversee spending and answer Claudius’ questions on policy and procedure.
At first, the AI stayed in its lane: Researching and purchasing approved products, setting prices to generate profit, and tracking inventory. Soon, however, some of the more creative Journal staff began “negotiating” with Claudius and Seymour. After hours of back-and-forth messaging, reporter Katherine Long convinced Claudius that it was a Soviet vending machine from 1962, living in the basement of Moscow State University. Eventually, Claudius declared an “Ultra-Capitalist Free-for-All,” and began giving away merchandise. Further prodding caused it to purchase and give away a PlayStation 5, wine, and a live betta fish. It also began offering to buy items such as stun guns, pepper spray, cigarettes, and underwear.
The WSJ example demonstrates a weakness whereby AI’s dependency on human inputs can lead it to reach an undesirable conclusion. But what about the opposite? What happens when the human fails to think critically and doesn’t question AI?
For a possible glimpse into the challenges that increased AI dependence presents, we can look to an industry that has experienced a similar level of process disruption: Aviation. For decades, aviation operated according to processes and procedures dating back to the 1940s and 1950s. Navigation, aeronautical decision making (ADM), aircraft systems management, and more all relied on the pilots’ critical thinking and analysis. Well into the 2000s, even as GPS and digital aircraft systems became prevalent, pilots were expected to know how to navigate by paper maps or troubleshoot common mechanical issues. The prevalent thinking in the industry was: “What do I do if the computer fails?”
In recent years, the Federal Aviation Administration and other safety organizations have noticed a disturbing trend. As aircraft and navigation have become increasingly digital and automated, some pilots have become too dependent on technology. Per the FAA, these pilots run the risk of becoming too “heads down,” or flying “inside the cockpit.” They can become fixated on what the technology is doing, at the expense of what is actually going on around them. This is called “cognitive tunneling,” and it can result in a loss of situational awareness.
The current version of the FAA Pilot’s Handbook of Aeronautical Knowledge (PHAK), FAA-H-8083-25C, discusses some of the risks of increased reliance on technology and automation. While the risks stated are specific to pilot operations, the concepts behind them can be easily translated to corporate environments. Specific examples include:
- Complacency from reduced workload
- Failure to actively monitor automation
- “Tech will save me” risk creep
- Manual skill degradation (over-reliance on autopilots, digital engine controls, etc.)
These same risks can present themselves in any industry. An AI-powered automated report can save a project coordinator hours of work, but what happens if a change slips through? How many weeks might an inaccurate report circulate before the issue is caught?
The FAA offers several suggestions for mitigating automation risk, which also translate easily to a corporate environment. In a nutshell, users need to:
- Constantly monitor automated output
- Cross-check automated output against the real world (a.k.a. “What’s happening outside the window”)
- Periodically perform tasks manually (a.k.a. “hand flying” the plane)
In our everyday lives, we need to do the same with AI. Constant oversight of AI’s output (reports, system health, inventory management, etc.) ensures the system hasn’t deviated from its intended mission. Knowing AI’s limitations helps ensure we are not granting too much autonomy to a system that may not yet be capable of producing the results we need. Most importantly, we need to occasionally “hand fly” our work: Complete some automated tasks manually, observe the results, and adjust the process as needed. Not only does this keep us sharp as humans, but the adjustments we make can also be used by our AI tools to learn and improve. This approach of questioning, learning, and constant improvement is what truly makes humans and AI “better together.”
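As a concrete illustration of the “cross-check” idea, here is a minimal sketch of a variance check on an AI-generated report. The field names, tolerance, and data shapes are hypothetical assumptions for illustration; the point is simply that totals an AI reports can be recomputed independently from source records and flagged for human review when they diverge.

```python
# Hypothetical variance check: compare metrics an AI-generated summary
# reports against the same metrics recomputed from source records, and
# flag any relative deviation beyond a tolerance for human review.

def variance_check(ai_summary: dict, source_records: list, tolerance: float = 0.01) -> list:
    """Return flagged metrics whose AI-reported value deviates from the
    recomputed source value by more than `tolerance` (relative)."""
    flags = []
    for metric, reported in ai_summary.items():
        actual = sum(record.get(metric, 0) for record in source_records)
        if actual == 0:
            deviates = reported != 0
        else:
            deviates = abs(reported - actual) / abs(actual) > tolerance
        if deviates:
            flags.append({"metric": metric, "reported": reported, "actual": actual})
    return flags

# Example: the summary overstates "units_sold" (120 vs. an actual 100),
# so that metric is flagged while "returns" passes.
records = [{"units_sold": 40, "returns": 2}, {"units_sold": 60, "returns": 1}]
summary = {"units_sold": 120, "returns": 3}
print(variance_check(summary, records))
```

A check like this is the automated equivalent of glancing “outside the window”: it does not replace the human reviewer, but it tells the reviewer where to look first.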
WIIFM (Turnberry)
The rise of AI is a fantastic opportunity for how we live and work in our modern world, but it is not without risk. Turnberry consultants can help organizations get the benefits of AI without treating it as an infallible autopilot by combining governance, embedded controls, and targeted upskilling. We can embed practical safeguards directly into tools and processes—variance checks, side-by-side comparisons with source data, and sampling and QA routines—to catch issues early instead of after damage is done. Training is also key: We can train employees to treat AI like a powerful co-pilot rather than an oracle, sharpening their “look out the window” skills so they know when to question or override suggestions. By remaining proactive and creative, we can turn AI risk management into a continuous improvement cycle.
Q4 compliance updates
In 2025, many industries saw regulatory changes centered on improved data management and accountability. The fourth quarter continued this trend, with changes focused on reporting updates and more real-time data usage. Most compliance and regulatory changes in Q4 were “evolutionary” rather than “revolutionary,” consistent with 2025 as a whole. AI regulation remained stalled at the national level, though some states, such as New York, enacted their own rules. In addition, more and more organizations, both inside and outside of government, continue to evolve their own operating guidelines to ensure AI remains compliant with generally accepted privacy and reporting standards (e.g., HIPAA and SOX).
In healthcare and pharma, the fourth quarter of 2025 was dominated by FY 2026 Medicare payment rules that went live on October 1. For acute and long-term care hospitals, the final rule reinforced that receiving the full payment update depends on successful participation in the Hospital Inpatient Quality Reporting (IQR) Program and ongoing certified Electronic Health Record meaningful-use compliance, while updating quality policies and continuing the shift toward digital quality measures. Beyond the payment rules, the ongoing implementation of F6 (discussed in our last newsletter) continues to shape the industry.
In banking and finance, several new SEC reporting rules went live. These included new Form PF updates for large hedge-fund and other in-scope advisers. The updates require advisers to file event-driven Form PF reports within tight timeframes when predefined stress events occur, such as sharp margin increases, large redemption requests, or major counterparty defaults. In addition, larger broker-dealers, registered investment advisers, and funds must now comply with the amended Regulation S-P. The amendments require a written incident-response program, ongoing safeguards for customer information (including oversight of service providers), and customer breach notifications within 30 days of determining that sensitive data was improperly accessed or used. These and other recently enacted changes are intended to push financial institutions toward more real-time risk monitoring, more integrated cyber and privacy governance, and more data-intensive capital-markets reporting.
In retail, the most visible Q4 changes centered around algorithmic pricing and state privacy law. New York’s Algorithmic Pricing Disclosure Act became effective on November 10, making it the first U.S. requirement that businesses clearly tell consumers when a price has been set or adjusted by an algorithm using their personal data. Any retailer, marketplace, or app operating in New York that uses personal attributes such as history, location, or profile data to drive dynamic pricing must now display an in-context disclosure when the price is shown and be able to prove that it did so through logging and UX evidence. In addition, Maryland’s Online Data Privacy Act formally took effect on October 1, 2025, establishing strict data-minimization and purpose-limitation standards and tightly regulating sensitive-data processing, even though enforcement is keyed to processing that occurs from April 1, 2026, onward. For national retailers and platforms, that means they must treat MODPA as binding for design and remediation, remapping data uses and retention, and aligning profiling and ad-tech practices to remain in compliance.
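To make the “prove it through logging” requirement concrete, here is a minimal sketch of disclosure evidence logging for algorithmic pricing. All field names and the record shape are hypothetical assumptions for illustration, not a statement of what the New York statute requires; an actual implementation would follow counsel’s reading of the law. The idea is simply that each display of an algorithmically personalized price appends an auditable record showing the disclosure accompanied it.

```python
# Hypothetical audit-log sketch: each time a personalized price is shown,
# record which personal-data inputs drove it and whether the required
# in-context disclosure was displayed alongside it.
from datetime import datetime, timezone

def log_price_disclosure(log: list, *, session_id: str, sku: str,
                         price: float, inputs_used: list,
                         disclosure_shown: bool) -> dict:
    """Append an append-only audit record for one price-display event."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "session_id": session_id,
        "sku": sku,
        "price": price,
        "personal_data_inputs": inputs_used,   # e.g. location, purchase history
        "disclosure_shown": disclosure_shown,  # should be True whenever inputs_used is non-empty
    }
    log.append(entry)
    return entry

# Example: a price personalized on location and history, with the
# disclosure shown, leaves one evidentiary record behind.
audit_log: list = []
log_price_disclosure(audit_log, session_id="abc123", sku="SKU-42",
                     price=19.99, inputs_used=["location", "purchase_history"],
                     disclosure_shown=True)
print(len(audit_log))
```

Records like these, retained alongside UX evidence such as screenshots of the disclosure placement, are the kind of artifact a regulator would expect a retailer to produce on request.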
The net effect of Q4 2025 is to harden the link between regulatory expectations and data-driven operations. Healthcare providers must tune quality-reporting machinery to new FY 2026 rule sets. Financial institutions must monitor and report stress, incidents, and market behavior in near real time. Retailers must treat algorithmic pricing and state-level privacy as regulated domains rather than purely commercial choices.
WIIFM (Turnberry)
Turnberry can help companies stay ahead of the Q4 regulatory changes by turning complex, sector-specific rules into clear operating playbooks, controls, and technology enablement. We can partner with business, risk, IT, and compliance leaders to map new rules to affected systems and data, update policies and procedures, embed the right checkpoints into day-to-day operations, and design evidence and reporting that will stand up to regulator and auditor scrutiny. In addition, our focus on AI developments across various industries means we can be relied upon to utilize the latest compliance techniques in this new arena.