Resilient Cyber Newsletter #73
- Chris Hughes from Resilient Cyber <resilientcyber@substack.com>
- Hidden Recipient <hidden@emailshot.io>
Resilient Cyber Newsletter #73Cybersecurity Incentives, AI Wildfire is Coming, Sins of Security Vendors, State of AI Report, Agentic AI Security, New OWASP Top 10 & the Evolution of AppSecWelcomeWelcome to issue #73 of the Resilient Cyber Newsletter. 2025 continues to be an interesting and exciting time, with frothy markets being disproportionately driven by AI, new and novel risks from AI agents, longstanding challenges with security vendors, and AppSec evolving from a new OWASP Top 10 to a revolution centered on moving from shifting left to runtime. I'll unpack everything this week, so let’s get started! Interested in sponsoring an issue of Resilient Cyber? This includes reaching over 40,000 subscribers, ranging from Developers, Engineers, Architects, CISO’s/Security Leaders and Business Executives Reach out below!
Cyber Leadership & Market DynamicsTwo Cyber Practitioners Accused of Hacking and Extorting U.S. CompaniesIn an odd story that got picked up by major news outlets such as CNN, two former employees of cyber firms have now been indicted and accused of participating in conspiracies to hack and extort U.S. firms. They were accused of deploying a popular ransomware against a medical device firm in FL, a pharmaceutical firm in MD, and a drone company in VA, among others. They tried to demand a $10 million payment, and even received $1.27 million in payments for their ransom activities. This is a case where practitioners may be tempted by the financial success of malicious actors using ransomware and decide to try it themselves. Unfortunately, they also apparently missed the reality that many end up getting caught and charged, especially if you’re here in the U.S. and not shielded internationally from extradition. Not Getting Incentives Right Can Kill a Security Initiative or Security StartupI find myself discussing incentives (or the lack thereof) in cybersecurity and how they drive behavior and the outcomes we observe in the cyber ecosystem. That is why I was happy to see my friend Ross Haleliuk recently publish a piece on the topic (while also giving me a shout-out). Ross penned an excellent piece highlighting how incentives drive behavior, including for organizations and developers. Developers are incentivized and promoted based on factors such as velocity and productivity, rather than security. Therefore, it shouldn’t be surprising when they don’t prioritize security. He is spot on and it is something I have spoken a lot about, in the sense of how we get frustrated when people don’t care about security as much as we do, and we shake our fist in anger all while ignoring the fact that incentives drive behavior and not only are developers not incentivized to prioritize security, but neither are organizations, as you look at the lack of impact to share prices, profits etc. when it comes to security incidents. That’s why efforts such as CISA’s voluntary, toothless Secure-by-Design pledge are mostly virtue signaling efforts that lack any real enforcement mechanisms. Speed to market and profits trump security, and honestly, security isn’t and never will be the organization's top priority, or their most pressing risk, as businesses grapple with a long list of competing risks, including constraints around capital, shareholder expectations, profits, competitive market share among peers, and more. I discussed this in depth in prior articles, such as: Ross provided a thought-provoking quadrant to go along with his article as well: Cyber firm F5 anticipates revenue hit after attackSpeaking of incentives, one of the primary concerns for corporations is revenue and profits. That is why it was interesting to see Cybersecurity firm F5, which has been a victim of a fairly damning attack recently, including the exposure of their source code, openly admit that they anticipate a revenue hit due to the incident. In a message to shareholders, the firm discussed how it’s anticipating the revenue hit. While it remains to be seen how much of a hit and the long-term impact it has, if any, it is sometimes refreshing to see some level of market reaction to cybersecurity incidents, even if they don’t last long, as the market often has a short memory. The AI Wildfire Is Coming. It’s going to Be Very Painful and Incredibly Healthy.While this is an “AI” piece, it is heavily focused on the market dynamics involved. It is genuinely one of the best and most comprehensive pieces I’ve come across yet on the whole AI bubble debate. The author Dion Lim puts together a fantastic article comparing the market to a wildfire, where tech cycles have peaks and valleys, including the need for fires to clear brush, redistribute talent, and leave infrastructure for following founders and players in the market. Dion walks through previous super cycles, including the burn of 2,000 and key players such as Amazon, eBay, Microsoft, and others; the growth of the Internet; the burn of 2008 and its key players; and the current AI cycle. Dion makes the case that what is unique to this cycle is unlike previous cycles, which had an abundance of small, overvalued startups; we have a market where the concentration is on the “tallest trees”, the most prominent players in the sense of Nvidia, OpenAI, and Microsoft, and their massive expenditures, investments, cross-investments, and more. This is something I have discussed in prior newsletter issues, where we discussed the outsized role these AI leaders are playing in the overall U.S. market growth and GDP, and the potential risks that they pose to the economy if there is a change of direction. Below are some great charts/diagrams from the article, and I really recommend reading it in its entirety. Busy Work GeneratorsThat’s how Adrian Sanabria describes most cybersecurity products, and he’s right. Most security tools are great at creating new problems and work for the organization. Alerts, notifications, findings - toil. The Sins of Security Vendor ResearchI’m a big fan of security vendor research. I routinely read, cite, and share vendor reports on the attack landscape, threats, and trends. However, like many others, I’m not naive to the fact that these reports and research often have a bias or an angle to them as well. Which is why I really enjoyed Rami McCarthy’s piece titled “The Sins of Security Vendor Research”, where he discussed some of these behaviors that influence security vendor research and the unintended consequences it can have. He cites the four sins as:
Rami discusses the negative impacts and outcomes that each of these sins has, and how they can alienate the very audience the researchers and vendors set out to share their research with or influence in the first place. I see this all the time, with teams producing reports that contain conclusions and findings justifying their products. On the one hand, it makes sense, given that it focuses on the problem space they’re most passionate about. On the other hand, it is also a fine line to ensure the reporting and research have merit and aren’t just leading customers to buy their products. AIState of AI ReportI recently came across this amazingly comprehensive breakdown of the state of AI from Nathan Benaich at Air Street Capital. It looks at advances and key themes of AI throughout 2025, including Research, Industry, Politics, Safety, and Predictions. This includes advances in models, market growth among key players, the geopolitics between China and the U.S. in the AI race, as well as key aspects related to Cybersecurity. I strongly recommend checking out the full presentation. Nathan has also put together a video walking through the presentation and key takeaways for those who prefer a video breakdown. Claude Pirate: Abusing Anthropic’s File API for Data ExfiltrationWe continue to see the expansion of AI tools such as Cursor, Windsurf, and Claude among others, and, every time the capabilities expand, so do potential attack scenarios. This blog highlights how you can use Claude’s new ability to perform network requests to potentially exfiltrate data that users have access to. LLM Memory Systems ExplainedOne of the key topics when it comes to LLMs that discussed in the industry is memory, or lack thereof for LLMs. This blog was an excellent primer on the topic that I found helpful for better understanding LLMs and memory. It is dubbed as a “introductory guide to how LLMs handle ‘memory’, from context windows to retrieval systems and everything in between”. The key theme is that LLMs do not remember anything, they are stateless, with each inference being independent and using prior messages as inputs to form a “context window”. The article does an excellent job of not only explaining LLMs and memory but the “three competing constraints” this lack of memory creates: The piece goes on to discuss techniques we use to get around these limitations including Summarization, Fast Tracking and Retrieval Systems, each of which comes with their own benefits and drawbacks. Some of these aren’t just performance related but also have security and privacy implications as well, and it is worth noting some providers may be using these techniques with or without your consent or awareness. Agentic AI Identity 101 Cheat SheetI keep running into teams looking to use AI Agents that can read data, call APIs, and even make changes across core business infrastructure. Resilient Cyber w/ Kamal Shah - The State of AI in SecOpsIn this episode of Resilient Cyber, I sit down with Kamal Shah, Cofounder and CEO at Prophet Security, to discuss the State of AI in SecOps. Prefer to listen? Please be sure to leaving a rating and review, it helps a ton. Agentic AI Security - Threats, Defenses, Evaluation and Open ChallengesAs we all know, we’re in the “decade” of Agents (queue Karpathy), with excitement around a near infinite set of use cases and potential. That said, as noted by the OWASP GenAI Security Project and others, Agentic AI also poses numerous threats and security challenges. Below is their comprehensive taxonomy of security threats to agentic AI: It also has useful diagrams to demonstrate direct and indirect prompt injection: The paper discusses relevant threats, mitigations and open challenges at depth and is an excellent publication to read. AppSecThe OWASP Top 10 Gets ModernizedThis week, at the event in D.C, OWASP® Foundation announced their 2025 OWASP Top 10. The update reflects the most significant update since 2021. This includes some categories being revised, rankings changing and the most vast AppSec dataset to date. I breakdown some of them major changes in a recent blog, including.
The Evolution of AppSec - From Shifting Left to Rallying on RuntimeIf you’ve been paying attention in AppSec over the last several years, we have seen the community get frustrated with the way we have tried to “shift left”, coupled with an emphasis on runtime visibility, creating the new category of Application Detection & Response (ADR). - We’ve come to grips with the reality that while we should still embed security early in the SDLC, we can’t prevent all vulnerabilities from reaching production.
GitHub Copilot With Major AnnouncementsGitHub Copilot recently announced some major capability releases, including the ability to assign code scanning alerts to Copilot for automated fixes, as well as Copilot coding agents now automatically validating code security and quality. Moving forward, the goal is for new code generated by Copilot’s coding agent to automatically be analyzed by GitHub’s security and quality validation tools, such as CodeQL and secret scanning, and analyzing risks in dependencies against the GitHub Advisory Database. They also mentioned that these new features do not require a GitHub Advanced Security (GHAS) license. I found these announcements noteworthy due to GitHubs dominant role in the development ecosystem, so them natively integrating these security capabilities has the potential to make a systemic impact on vulnerabilities and risks. False Negatives in SAST: Hidden Risks Behind the NoiseWe hear a ton about false positives in SAST tools, and rightfully so, as it wastes a ton of time among developers and AppSec practitioners. But, what about false negatives and the potential real risks organizations overlook as a result of them? This piece from Endor Labs highlights the latest research from academia and industry alike, demonstrating how SAST tools miss between 47% to 80% of vulnerabilities in real-world tests, and even combining multiple SAST tools only reduced false negative rates 30% to 69%, which also creating even more false positives. The article highlights an alarming stat:
The blog goes on to highlight the limitations that cause SAST tools to miss findings, such a logic and context problems and the uncomfortable tradeoffs organizations have to make when using SAST. Many are excited about the combination of AI and SAST into AI SAST tools which can address complex business logic and other shortcomings of traditional SAST. “Fund Us or Stop Sending Bugs”A bit of an uproar erupted on X recently involving an open source framework in FFmpeg and Google and others. It revolved around open source vulnerabilities, security disclosures and the responsibilities of large tech firms when it comes to open source. The debate involves Google’s AI agent finding bugs in FFmpeg, and the bugs being obscure, and the open source project maintainers emphasizing that they are volunteers and asking that Google either fund the remediations, or provide patches themselves with bug reports. This is a really fascinating case where the longstanding voluntary and unfunded nature of open source is running into the use of AI to find bugs which exacerbates demands on the voluntary maintainers. Invite your friends and earn rewardsIf you enjoy Resilient Cyber, share it with your friends and earn rewards when they subscribe. |
Similar newsletters
There are other similar shared emails that you might be interested in:

















