“Have fun at RSA!” These are the words I hear from friends, family, and colleagues at work who don’t have the opportunity to make the pilgrimage to San Francisco for the largest gathering of security folks of the year, the RSA Conference. Regardless of whether you are a vendor, buyer, or general attendee, you likely approach RSA with one part anticipation and one part dread. The conference has grown so big that it’s sucked in a big chunk of downtown San Francisco – first the North Hall of the Moscone, then the West Hall, and this year spillover sessions at the Marriott Marquis.
Two weeks prior to RSA this year, I was stuck on a conference call and thought it would be fun to make a t-shirt that would allow me to scoot quickly through the expo halls at RSA. Given that the exhibition halls at RSA are the business and selling side of the security market, the goal of exhibitors is to engage in conversation with attendees who wander by their booth, and to quickly “qualify” them to make sure they can actually purchase goods or services.
Year after year, I’ve had to say versions of “no, thank you” or “don’t shoot, I’m a civilian” to deflect these inquiries. After nearly 20 years of RSAs and other security conferences, I needed an efficiency play, and I thought a t-shirt communicating to the world that I had no purchase authority would be my best bet.
The irony, of course, is that I’m a vendor too. My efficiency play was more to save time for both me and the vendors, because I, in fact, have no purchase authority. Instead of stopping every 10 feet to explain, I wanted to be able to motor around the exhibits with my elven invisibility cloak. Voila, t-shirt! What started as a bit of harmless fun quickly turned viral when @nousie posted this tweet on the Monday night of RSA week, while I was hanging out at the W Hotel – again, ironically – waiting for a client.
I had no idea how much the t-shirt would resonate with attendees, from other vendors (“that’s hilarious, man”) to security leaders who had purchase authority (“you have got to send me one of those!”). What started out as a bit of fun had blown up my Twitter feed by Tuesday morning of RSA.
By Monday afternoon I was outed, so I decided to have fun with it. Friends like Jack Daniel, Trey Ford, Wendy Nather, Tom Brennan, and Jeff LoSapio got a kick out of it too, as did many others.
Perhaps the most fun was with the exhibitors, who to a person were good sports, including:
With Laura Agawal and a sad marketing guy at HyTrust
With Caroline Bernier and Anna O’Donoghue of Avecto
With Elisa Lippincott, Trend Micro
Kate Brew and alien baby from AlienVault
And finally Joy Powers at Spark Minute
My favorite moment was when I overheard an exhibitor say “it’s no longer about the security of the network, it’s now about network security.” I still don’t know what that means… My t-shirt was my feeble response to the over-the-top experience that the RSA Security Conference has become. So we had a bit of fun, and not at the expense of anyone (really). The response to the t-shirt was exceedingly positive, which was probably indicative of the collective reaction to the overwhelming experience in and around the Moscone Center during RSA 2017: the sensory overload, the armies of attendees trying to squeeze through tight spots in the Moscone Center (under construction), and free-flowing alcohol coming from different directions at all hours of the day…
At least we have until Black Hat 2017 to recover.
Now that the inauguration and many of the Senate confirmation hearings are behind us, I’m starting to gather my thoughts as a security guy around cybersecurity policy in the new administration and where President Trump might take us all.
Let me state up front that I’m not an apologist for the President, nor do I plan to praise him. What I’m struggling with – along with many Americans – is understanding how President Trump will address cybersecurity issues.
What can we make of President Trump’s public statements on cybersecurity? Where will cybersecurity leadership come from and how will policy be shaped by the status quo and unforeseen foreign policy crises yet unimagined?
To restate the obvious, President Trump left us a few gems during the debates and campaign, including:
Aside from lighting up Twitter, creating tons of internet memes and making many in the security community groan, I doubt there is much we can infer from Trump’s off-the-cuff remarks about cybersecurity during the debate. Couple these with his stream-of-consciousness remarks about Russians and hacking and we’re presented with a larger, arguably confusing body of work.
I tried to make sense of what he said, and I couldn’t. Then fellow Texan and Texas Tribune President Evan Smith pointed me in the direction of political commentary written by Salena Zito of The Atlantic in the run-up to the November election. What Salena observed was that the press completely misread then-candidate Trump. In fact, she argued that:
While dismissing Trump, the media dissected every tweet. His supporters, however, took him completely seriously, giving him broad license to comment on whatever came to mind without giving it a second thought – likely because of their frustration with the status quo.
This struck a chord with me, offering several reasons to believe that regardless of what President Trump said on the campaign trail, cybersecurity policy will evolve under his term of office. In fact, we might even see progress on several fronts, for example:
How the next four years turn out for cybersecurity is still up for grabs; in fact, we could say that about all policy issues, as we just don’t know if President Trump is crazy or crazy smart. Let’s hope during his administration we make more progress on cybersecurity issues that result in better protecting ourselves from the most obvious threat.
We recently announced the release of ThreadFix 2.4, which includes our patent-pending HotSpot technology that identifies where internal teams are sharing code among themselves and where that code has vulnerabilities. It is similar to what solutions like BlackDuck, Sonatype, and OWASP Dependency Check do for vulnerabilities in known open source components – but for code developed inside your organization. Let’s look at the Why and the How.
At a macro level, the ThreadFix platform lets organizations manage their application security programs. A huge part of running an effective application security program is making the hard decisions about what vulnerabilities to fix and when. You will never have enough resources to fix everything, so you have to be smart about what you do fix. HotSpot gives you better information about which vulnerabilities might be the most impactful to fix – because if you can fix a vulnerability introduced by one team, and that code has made its way down to multiple consuming teams, your code-level fix of one vulnerability will result in multiple vulnerabilities being addressed. Identifying opportunities for leverage like this can be critical as you struggle to make progress reducing your risk exposure from application-level vulnerabilities. Knowing that a specific vulnerability is repeated across applications isn’t the only factor that goes into a remediation decision, but having that knowledge provides valuable context.
More specifically, the idea for HotSpot came about the way most great features do – from working with customers addressing challenges in their environments. As part of our consulting practice, we do a lot of work with organizations helping them remediate vulnerabilities identified in their applications. A big part of this is helping organizations roll out static analysis programs (SAST in Gartner-speak). Typically when we help a team get their application put through its first static analysis run, we’ll sit down with team leaders to help characterize the results – working with them to prioritize the findings and provide insight on the most efficient ways to address the issues. What we noticed over time is that we had a lot of conversations that went something like this:
Security Analyst: This one is a very serious vulnerability and you’ll want to get it fixed as soon as possible.
Customer: We can’t fix that. We don’t own that code.
Security Analyst: Could you be more specific?
Customer: We get that code from the [XYZ] team. We can’t change it.
Security Analyst: So I guess we’re going to need to talk to the [XYZ] team…
After we had enough of these conversations we had the idea: Can we proactively identify situations where this type of inter-team code sharing is happening and where the code being shared is responsible for serious vulnerabilities? Because if we could then we could get out in front of these conversations, and, more importantly, we could start to map out how code was flowing within these organizations and identify risky situations where code with vulnerabilities was “infecting” additional applications.
Let’s look at how most enterprise applications are constructed as well as a couple of scenarios where you would find vulnerabilities in those applications:
Most modern enterprise applications don’t solely consist of code specifically written for that application. In fact, most of the code probably consists of both open source and commercial libraries and frameworks. In a large enterprise, this code is likely augmented by code developed internally to the organization but shared across teams and applications. Finally, applications have whatever code is specific to that application.
In Scenario (1) we are looking at a vulnerability where an attack attempting to exploit that vulnerability will enter the application in application-specific code, possibly flow through enterprise-specific components, but the vulnerability data or control flow stops somewhere in a generally-available commercial or open source library or framework. As mentioned above, these are the types of vulnerabilities that are identified by Component Lifecycle Management (CLM) solutions like BlackDuck, Sonatype, and OWASP Dependency Check.
In Scenarios (2) and (3) we are looking at vulnerabilities where an attack would enter the application via the application-specific code, but where the attack data or control flow also ends within code developed within the organization. These are the types of vulnerabilities typically identified by Static Application Security Testing (SAST) tools like Checkmarx or HPE Fortify. The difference between Scenarios (2) and (3) is that in Scenario (2) the team developing the application “owns” or is responsible for all the code reflected in the data or control flow path associated with the vulnerability. Therefore, the application team is in a position to make the necessary changes to remediate the vulnerability.
In Scenario (3), however, we see the situation described above where the bottom end of the data or control flow ends up in code that is “owned” and maintained by another team within the organization. This leads to the remediation issues also described above.
So how can we identify these situations? In ThreadFix, we have access to static analysis results across the various teams and applications within the organization and across the various SAST technologies that might be in use. So – instead of looking at the SAST results for a specific application, we can compare across the entire enterprise and look for commonalities. What we find often looks like this:
Looking through these example data/control flow traces, what we see is that the last four entries are shared between these two vulnerabilities. Doing this type of analysis within the static results for a single application can help highlight “chokepoints” that provide opportunities to remediate multiple vulnerabilities by making changes in one specific place. Performing this analysis across an organization’s entire portfolio identifies similar opportunities for remediation leverage; by also looking at associated data such as file names, package names, directory names, and line-of-code contents, we can identify situations where multiple teams are making use of the same codebase to build part of their application.
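To make the idea concrete, here is a minimal sketch – not the actual ThreadFix implementation – of how you might detect that kind of cross-application trace commonality. The function names, the frame format, and the minimum-overlap threshold are all illustrative assumptions:

```python
from collections import defaultdict

def shared_suffix(trace_a, trace_b):
    """Return the longest common trailing run of two data/control-flow
    traces (each a list of file:line frames)."""
    suffix = []
    for frame_a, frame_b in zip(reversed(trace_a), reversed(trace_b)):
        if frame_a != frame_b:
            break
        suffix.append(frame_a)
    return list(reversed(suffix))

def find_hotspots(findings, min_overlap=3):
    """Group findings from different applications whose traces end in the
    same frames -- a hint that they bottom out in a shared internal
    component. `findings` is a list of (app_name, trace) pairs."""
    hotspots = defaultdict(set)
    for i, (app_a, trace_a) in enumerate(findings):
        for app_b, trace_b in findings[i + 1:]:
            if app_a == app_b:
                continue  # we care about cross-application sharing
            suffix = shared_suffix(trace_a, trace_b)
            if len(suffix) >= min_overlap:
                hotspots[tuple(suffix)].update({app_a, app_b})
    return hotspots

# Hypothetical example: two apps whose injection traces bottom out in
# the same shared internal data-access component -- a remediation hotspot.
findings = [
    ("billing",  ["billing/Controller.java:10", "billing/Service.java:55",
                  "common/Dao.java:12", "common/Query.java:30",
                  "common/Sql.java:8"]),
    ("shipping", ["shipping/Api.java:77", "common/Dao.java:12",
                  "common/Query.java:30", "common/Sql.java:8"]),
]
for suffix, apps in find_hotspots(findings).items():
    print(f"{sorted(apps)} share trace suffix: {list(suffix)}")
```

A fix made at the top of that shared suffix would remediate the finding in every consuming application at once, which is exactly the leverage described above.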
So that is how HotSpot identifies potential situations where internal code reuse is leading to vulnerabilities spreading throughout an organization’s enterprise-wide codebase. On top of this we do some confidence-scoring and prioritization to make the raw data more valuable when driving remediation decisions, but at its core, this is the technique we’re exploiting.
Contact us to talk about how your organization can use HotSpot and other ThreadFix capabilities to increase the efficiency of your application security program and address more risk, faster.
Each year across the country, right after Thanksgiving, a curious thing occurs at many technology vendors. Marketing professionals reach out to their company thought leaders to let them know that it’s time to produce a prediction report. Shortly thereafter, collective eyes roll and groans accompany candid statements such as “I have nothing new or unique to add.” After some chiding and a twinge of guilt, most agree to a brainstorming session with marketing, nominating one or two peers to cobble together a list for public consumption.
Though positioned as unique findings, most are based entirely on anecdotal evidence. The process is anything but scientific. To get warmed up, we often use the magic of our favorite search engine to find and consider predictions from the past. While many predictions are entirely self-serving, others are so blatantly apparent that anyone with a shred of technical knowledge could have made them.
So, in spite of all that I’ve laid out above, I’ll embark down the perilous path of passing on a few observations that I think could impact the security world in 2017. I won’t call them predictions, just some common observations that are interesting to us – interesting enough that in certain cases, we’re pivoting our business to take advantage of these trends.
As a software security vendor, we regularly interact with a portfolio of unique clients, trade notes with competitors and partners at security shows, and interact with industry analysts and smart tech reporters. We notice common denominators – clients asking the same types of questions, or reporters curious about the same types of topics. So although we live in a world of anecdote, remember: two anecdotes just might equal data. Below are some of the more interesting “observations” that you just may want to pay attention to in the new year.
The old adage that the best security folks are those who think like attackers applies here. That sentiment will serve you well as attackers expand their targets to include smaller enterprises, or those in industries that thought they were immune to attack. There are two types of enterprises: those that are targeted and those that are targets of opportunity. Targeted enterprises are typically ones whose core function will always attract attackers because of what they do or who they serve – banks, financial institutions, retail, and government and military organizations. Everyone else must avoid being a target of opportunity, an enterprise that attackers might not normally focus on, but that is ripe for attack because someone clicked on a malicious link and forgot to close a TCP port on the firewall. We see an increasing number of non-traditional enterprises being targeted because botnets and phishing attacks are so easily automated and executed. The challenge for the security professional is to recognize this pattern of activity and adapt to the new threat environment. If you are the lone security person in a smaller or non-targeted enterprise or industry, you will carry the burden because you know the consequences. As a result, you will have to do a better job of securing resources to protect your enterprise. If you are a security professional in a large, sophisticated financial institution, you will have to extend your understanding to include business units to better prepare for fraud and account takeover activity. Here’s the bad news: your job just got harder. The good news: you get paid way more than you did five years ago.
There are several macro trends affecting the organization you work for that will fundamentally change IT and your job. You may already be aware of several of these trends, and you may actually be on the receiving end of one or more of them. The first that we see occurring throughout organizations is the fragmentation of centralized IT. The days of the imperial CIO are coming to a close, as business units embed developers and other roles that used to reside solely in the IT organization. This fragmentation is being accelerated by the transition to the cloud, with many non-IT leaders making cloud decisions without the awareness or approval of IT. We witnessed one large bank whose VP of Sales moved their CRM to Salesforce.com and informed IT after the fact. This happens more than you may think and points to the diminishing power of the CIO to prevent it. To make matters more confusing, organizations have become more project-driven, changing staff and organizational structure as projects are stood up and torn down. This is reflected in the people organizations employ or contract to get the job done. Instead of full-time equivalents (FTEs), organizations are likely to engage a spectrum of project staff including FTEs, temp-to-perm, long-term contractors, short-term staff augmentation, offshore, etc. Your challenge is to understand the organizational changes happening around you and adapt to provide sound security recommendations to the appropriate project at the right time. You will have to become even more savvy about the organization, keeping your ear to the ground and maintaining informal contacts outside your group to pick up on macro changes that will affect your security team.
Unless you work for an organization that doesn’t build software internally, you are likely aware of how DevOps and Agile are changing your world. Security, specifically application security, is not remotely close to being solved, for the reasons I’ve outlined above (sophistication of the threat and change within the organization). Yet organizations are pushing to build software and deploy systems at a much faster tempo, implementing the concepts outlined in Continuous Integration/Continuous Deployment (CI/CD). Competition and a variety of other compelling reasons are driving this, but understand that IT and security are both on the receiving end of this trend. The bad news is that you will have to understand your application development and deployment strategies better and get up to speed on CI/CD concepts and technologies. The good news is that you might have the opportunity to architect application vulnerability testing into the CI/CD process, allowing you to get upstream of many of the thorniest application vulnerabilities. Dan Cornell’s piece about CI/CD and security is a great starting place for understanding how to build security into the CI/CD process. You will most certainly have to pick up new skillsets involving DevOps and Agile as organizations move toward a faster deployment schedule. The move provides certain opportunities and pitfalls that will likely determine how security is implemented in your organization.
From news reports on election hacking to the latest breach story, cybersecurity has no doubt gone mainstream. I accept this fact, even if I refuse to call it “cyber” like most purists. Although cybersecurity has become more central to business, non-practitioners still struggle to understand key concepts of the security world. I think cybersecurity will one day be a core skillset of most business managers, but until then, your role will continue to be looking for metaphors to explain technical security concepts in ways non-technical folks can understand. I’ve found that some of the best security pros out there can put technical security concepts in layman’s – or business – terms, and do so with the greatest of ease. When executives ask “why don’t we just hack them back?” you will be called on to lay out why that’s not a great idea, and do so in a convincing way. As a security professional you will become “Explainer-in-Chief,” if you’re not that already. It’s great to be loved though…
These are four observations that we offer up, affecting our clients from the largest and most sophisticated to the smallest and less security savvy. Much of what we’ve observed is echoed by analysts like Gartner and will likely only gain momentum given the competitive pressures 2017 will bring. But then again, I could be completely wrong, and, like the AV vendor predicting malware on mobile devices, deeply affected by my own experiences in the consulting trenches. Regardless, good luck in 2017, which will likely be even trickier than 2016!
Businesses and development teams are rushing to embrace DevOps so they can be more agile, deploy code more quickly, and provide more value to their customers. Hallmarks of DevOps initiatives are support for significant automation, flexible provisioning, and cultural support for shared responsibilities. This often makes security teams uncomfortable, and they find themselves on the receiving end of this trend with little power to stop or even slow these changes. But the shift to DevOps does open a window of opportunity for security teams to exert influence and improve the security of applications.
Before considering what it means to have application security testing integrated into the DevOps Continuous Integration/Continuous Delivery (CI/CD) pipeline, it is worth asking why it is valuable to integrate application security testing into these pipelines in the first place. A fundamental tenet of DevOps and the reason for having CI/CD pipelines for software builds is to allow teams to have up-to-the-minute feedback on the status of their development efforts so that they know if a build is ready to push to production. This involves testing quality, performance and other characteristics of the system. And it should include security as well.
By integrating security into the CI/CD pipeline, security vulnerabilities are found quickly and reported to developers in the tools they’re already using. This removes friction from the remediation process. Instead of relying on an ornate change management process, security vulnerabilities are quickly reported as software bugs to be addressed – preferably by the developer who recently introduced them into the codebase. Security moves beyond something handled on a quarterly or annual basis to being just another check before developers can feel that they are “code complete” and move on to another task.
Conceptually, there is no reason why security testing should not be included alongside other CI/CD testing concerns. In practice, however, there are issues that can make integrating application security testing into CI/CD pipelines challenging. Many developers do have some knowledge of application security, but struggle with specifics. If a pipeline build fails due to unit tests or functional tests failing, developers can consult user stories or apply some common sense to identify and diagnose the issue. However, for developers without a strong background in secure coding, security issues identified during pipeline builds can be arcane and challenging to address.
In addition, most security tools are not well suited out of the box for integration into CI/CD pipelines. They are built for use by security teams with expertise in application security, and their results are meant to be consumed by those with similar backgrounds. Moreover, their run times can be long when viewed against the desire to rapidly approve builds for delivery. Many security tools are designed with the intention that they be exhaustive – identifying all risks so as not to miss minor details. That is not the best characteristic for security tools in a CI/CD pipeline. Also, most application security testing tools were originally intended to be run in an interactive mode by an analyst. Fortunately, many popular application security testing tools like OWASP ZAP are starting to expose APIs that support the type of automation required for CI/CD integration.
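As an example of that kind of automation, here is a hedged sketch of driving ZAP from a pipeline step via its official Python client (pip install python-owasp-zap-v2.4). It assumes a ZAP daemon is already running; the proxy address, API key, and target URL are placeholders for your environment:

```python
import time
from zapv2 import ZAPv2

TARGET = "https://staging.example.com"  # hypothetical build under test
zap = ZAPv2(apikey="changeme",
            proxies={"http": "http://127.0.0.1:8090",
                     "https": "http://127.0.0.1:8090"})

# Spider first so the active scanner has URLs to attack.
scan_id = zap.spider.scan(TARGET)
while int(zap.spider.status(scan_id)) < 100:
    time.sleep(2)

# Active scan; in CI you would use a trimmed scan policy for speed.
scan_id = zap.ascan.scan(TARGET)
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(5)

# Pull alerts back so the pipeline can decide whether to fail the build.
high_alerts = [a for a in zap.core.alerts(baseurl=TARGET)
               if a["risk"] == "High"]
print(f"{len(high_alerts)} high-risk findings")
```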
So, what should the success criteria be as we look at application security testing within CI/CD pipelines? The first question to ask is “are we getting value from the testing we are doing?” This means determining if the development team is being notified of important vulnerabilities quickly after their introduction so they are easy to fix by the developer who only recently introduced them. It is also critical to make sure that the application security testing activities are not too expensive. Security testing gets expensive when:
All these issues require resources to address, and if the cost of application security testing is too great then it does not make sense for development teams to integrate this testing into their pipeline.
When looking to integrate an application security testing policy into a developer’s CI/CD pipeline, there are three phases that need to be specified:
The right policy for a given application will depend on several factors including the risk profile of the application in question and the risk tolerance level of the organization. Mission critical applications that manage valuable data subject to compliance requirements should be treated differently than less critical applications managing public data. In addition, different policies can be applied at different times – it may make sense to apply one policy on every check-in or on a nightly build, whereas another more stringent policy might be applied to a weekend build or a build that is run at the end of an iteration. Developing these policies is a collaborative effort between security teams and development teams.
Application security testing approaches for CI/CD pipelines are fundamentally different than the monolithic point-in-time testing approaches often practiced by security teams. For CI/CD integration, the focus must be on the optimizations needed to do security testing frequently, rather than the goal of exhaustive security testing. This requires a testing configuration that:
Application security testing in CI/CD pipelines also requires a mindset change away from one that tries to avoid ever passing a build that contains a vulnerability, to one focused on the “window of exposure” and “mean time to fix” for vulnerabilities. Teams can risk deploying something with vulnerabilities into production if they can correct identified issues quickly.
So, what does that mean for security teams integrating application security testing into CI/CD pipelines? First, teams must trim down rulesets to reduce false positives and reduce run times. The default behavior of most application security testing tools is to run an exhaustive set of rules geared at producing the most findings. This results in long run times and more false positives. Tuning CI/CD-based testing to only run high-confidence tests that are going to find the most important vulnerabilities reduces both testing run times as well as false positive rates. The focus for application security testing in CI/CD is on early identification of obvious and serious vulnerabilities and quick communication of these to the development team. This means that these issues can be addressed quickly and the build can be fixed. This focus on the vulnerabilities that are easy to identify with automation makes additional sense because those are the types of vulnerabilities that many attackers are also going to be able to identify and exploit using similar automation but on live environments.
A critical concern is determining how to make the tests run as fast as is reasonable. In general, there are a couple of ways to approach this. Controlling the rulesets to limit checks can help reduce application security tool runtimes. In addition, doing differential or incremental scans can help reduce the scope of the testing being performed, with associated time savings. For static application security testing (SAST), tests can be run only on the portions of the codebase that have changed since the last round of testing; the ability to do this type of differential testing is typically vendor-dependent – Checkmarx, for example, provides this capability. For dynamic application security testing (DAST), you can have the scanner look only at URLs that are new or that have been modified based on the changes to the codebase since the last set of tests was run. See this presentation looking at attack surface calculations for more information on tracking application attack surface changes over time.
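As a rough illustration of scoping an incremental scan, the sketch below uses git to find source files changed since the last tested commit and hands only those to a scanner. The scanner command line is a placeholder – as noted above, real differential support is vendor-specific:

```python
import subprocess

def changed_source_files(last_tested_commit, extensions=(".java", ".js")):
    """List source files modified since the last scanned commit."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{last_tested_commit}..HEAD"],
        capture_output=True, text=True, check=True).stdout
    return [f for f in out.splitlines() if f.endswith(extensions)]

files = changed_source_files("a1b2c3d")  # hypothetical baseline commit
if files:
    # Hand only the changed files to the (placeholder) scanner CLI.
    subprocess.run(["sast-scanner", "--files", *files], check=True)
```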
Synchronous tests are those that are started with the intention that they are completed in a reasonable amount of time, such that the results of these tests can be used to decide about whether to break the build. These tools or tests run through completion. Where possible, it is preferable to use synchronous tests because we can make go/no-go decisions based on the outcomes. But this requires that these tests be run in a short enough time window that they are not unduly holding up the completion of the build process.
Asynchronous testing tasks are those that are initiated as part of the CI/CD pipeline, but that are not expected to complete before a decision is made to “break the build.” It is simply a reality that for large applications or certain testing technologies testing will not complete within an acceptable time window.
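One simple way to implement this split is to give a scan a fixed time budget: if it finishes inside the budget it is effectively synchronous and its verdict can gate the build; if not, it continues asynchronously and its findings are triaged out-of-band. A sketch, with a placeholder scan command:

```python
import subprocess

SCAN_CMD = ["security-scan", "--target", "build-artifact"]  # hypothetical
BUDGET_SECONDS = 600

proc = subprocess.Popen(SCAN_CMD)
try:
    proc.wait(timeout=BUDGET_SECONDS)
    build_gated_by_scan = True   # synchronous: verdict decides go/no-go
except subprocess.TimeoutExpired:
    build_gated_by_scan = False  # asynchronous: the build proceeds and
                                 # results are triaged when the scan ends
```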
The decision phase is where a go/no-go decision is made based on the results of the synchronous tests, and where the build fails if the results of security testing are not satisfactory. Organizations would not go live with a build that had serious quality errors based on unit and functional testing, but many organizations will go live with security vulnerabilities in their applications. Teams do it every day, and it is important to acknowledge that this is the current state of practice in the industry. Teams have to make a decision about security. A challenge with this decision is that it is less clear-cut than one based on deficient functional test results, because in this case the team is approving a build that works but that will expose the organization to risk if it is deployed.
What criteria are used to make these risk decisions? First it is the severity and type of vulnerabilities identified. From a severity standpoint, automated scanners are going to assign severities to vulnerabilities and these severities can be used to approximate the riskiness of deploying the build currently being tested. A build is allowed a certain amount of perceived risk before it is considered unacceptable to pass. In addition, there can be value in examining the types of vulnerabilities identified. Certain types of vulnerabilities like SQL injection may be considered unacceptable for a build because of their potential impact, regardless of the scanner’s perceived severity of a specific vulnerability.
A valuable concept when implementing application security testing in CI/CD pipelines is the “newness” of vulnerabilities. In a perfect world, security teams could make policies such as “no critical or high vulnerabilities in production.” In the real world, and in dealing with applications that have been under development for a time without security testing, this may not be politically feasible. “No critical or high vulnerabilities” may not work, but “no new critical or high vulnerabilities” may be defensible. After all – the developers shouldn’t be introducing more vulnerabilities now that everyone agrees that they are a problem and there is testing in place. In many situations, this is a more acceptable approach. As we have seen above, when integrating application security testing into CI/CD pipelines, pragmatism is a primary driver.
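A minimal sketch of enforcing a “no new critical or high vulnerabilities” policy might diff the current scan results against a stored baseline and fail the build only on newly introduced issues. The file names and the “fingerprint” field – a stable identifier for a finding, such as rule id plus file plus sink – are assumptions for illustration:

```python
import json
import sys

BLOCKING = {"Critical", "High"}

def load(path):
    """Load findings as a set of (fingerprint, severity) pairs."""
    with open(path) as f:
        return {(v["fingerprint"], v["severity"]) for v in json.load(f)}

baseline = load("baseline-findings.json")  # hypothetical file names
current = load("current-findings.json")

new_blocking = [(fp, sev) for fp, sev in current - baseline
                if sev in BLOCKING]
if new_blocking:
    print(f"Build failed: {len(new_blocking)} new critical/high findings")
    sys.exit(1)  # non-zero exit breaks the build
```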
Unlike with most security testing, development teams are the direct consumers of these results. So, the development teams must be able to consume the results of this reporting without intervention from the security team. This means that outputs from the testing need to be delivered to the tools the development team is using for managing bugs. Teams have often made a significant investment in both deploying tools and crafting processes, and any security testing done in the CI/CD pipeline needs to have its results slipstreamed into these systems and processes to be actionable – otherwise security just slowed down the build process to serve its own needs. Historical vulnerabilities must be tracked so they are only reported to developers once, and because testing is being run frequently on incremental code changes, the count of new vulnerabilities identified per run should be small. Finally, reporting needs to package the vulnerabilities in the way that is going to be most useful and most consumable by the development teams. This means providing appropriate context as well as supporting materials so that developers can self-serve the information they need to fix the issues.
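For illustration, slipstreaming a finding into a team’s existing tracker can be as simple as a REST call – here Jira’s issue-creation endpoint, with the URL, credentials, and project key as placeholders:

```python
import requests

def file_defect(summary, description):
    """File a finding as a bug in the team's existing tracker (Jira)."""
    payload = {"fields": {
        "project": {"key": "APP"},  # hypothetical project key
        "summary": summary,
        "description": description,
        "issuetype": {"name": "Bug"},
    }}
    resp = requests.post("https://jira.example.com/rest/api/2/issue",
                         json=payload, auth=("ci-bot", "secret"))
    resp.raise_for_status()
    return resp.json()["key"]

key = file_defect("SQL injection in OrderDao.query()",
                  "Found by pipeline scan; see attached trace.")
print(f"Filed {key}")
```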
There are several common strategies for bundling vulnerabilities into software defects:
Bundling vulnerabilities by type makes sense in many cases because the code-level changes for remediation are often the same. They use the same encoding function, same coding pattern, etc. Developers can fix many vulnerabilities quickly if they are making the same kind of changes to code.
Bundling by code location makes sense when one developer is responsible for a specific part of the codebase, and perhaps they are the only one who can easily maintain that part of the codebase. From an agile standpoint, this might not be ideal, but it does reflect the reality of many development teams.
Bundling by severity makes sense in situations where the application has its security “under control” – i.e. it has been cleared of major vulnerabilities. In cases like this, bundling by severity after a particularly bad check-in may make sense. This highlights all the new important vulnerabilities and allows a developer to go in and address the new issues that have been added.
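A small sketch of the three bundling strategies described above, grouping a flat list of findings into defect-sized buckets; the field names are illustrative:

```python
from itertools import groupby

findings = [
    {"type": "XSS",  "file": "web/Search.java",  "severity": "High"},
    {"type": "XSS",  "file": "web/Profile.java", "severity": "Medium"},
    {"type": "SQLi", "file": "web/Search.java",  "severity": "Critical"},
]

def bundle(findings, key):
    """Group findings by a key function; each group becomes one defect."""
    for k, group in groupby(sorted(findings, key=key), key=key):
        yield k, list(group)

by_type     = dict(bundle(findings, key=lambda f: f["type"]))
by_location = dict(bundle(findings, key=lambda f: f["file"]))
by_severity = dict(bundle(findings, key=lambda f: f["severity"]))
```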
For application security testing in CI/CD pipelines to be successful, there must be onboarding and maintenance processes in place. Onboarding an application for CI/CD involves running an initial scan with the target ruleset, culling out false positives, and possibly further tuning the ruleset. This is typically done by the application security team because it often requires a lot of skill with the testing tools. The onboarding process is also a great opportunity for the security team to learn more about the development team and the specific characteristics of the application being brought under management.
Maintaining the testing policies over time is also required. There must be a process in place for development and security teams to flag false positives and return builds to passing status. Over time, analysis of these false positive reports can provide data on how to either alter the overall testing policy or to further evolve the rules being used for testing. This feedback loop allows the security and development teams to work together to further the goals of application security testing in CI/CD pipelines: find important vulnerabilities quickly and report them to development teams for resolution without slowing the process down with a lot of false positive “noise.”
There are many benefits to incorporating application security testing in developers’ CI/CD pipelines. This testing allows development teams to be informed quickly about serious security vulnerabilities that have been introduced to their codebase so that those vulnerabilities can be fixed. It also gives development teams confidence that they are ready for “continuous delivery” because security aspects of code correctness have been addressed along with more traditional functional aspects. However, to successfully introduce application security testing to CI/CD pipelines, security teams must accept some risk and make concessions involving the depth and breadth of testing with the belief that shallow testing done more frequently provides value. Understanding political tradeoffs within the organization as well as understanding how to best tune application security testing tools to meet these somewhat esoteric goals will allow security managers to reduce risk via tighter integration with development team efforts.
We have been doing quite a bit of work helping organizations integrate application security testing into their CI/CD pipelines, and we are going to be distilling a lot of those experiences into ThreadFix to make it even easier for teams to reap the benefits. Contact us if you would like to know more about staying secure during your transition to DevOps.
Several folks looked at drafts of this blog post and provided feedback. Any good ideas are likely stolen from them and any bad ones that remain are my own. Thanks to Bryan Beverly, John Dickson, Cap Diebel, Matt Konda, Greg Leeds, Andrew Montz, Kyle Pippin, David Rook, Matt Snider, and Ben Tomhave.
]]>If you’re lucky enough to work at a retail company, the next several weeks of holiday shopping may be the difference between a financially successful or unsuccessful year. As buyers, we’re all too familiar with the holiday shopping season, whether we choose to buy our gifts from Amazon and other online retailers or brave the traffic and crowds for a more hands-on experience. That said, you may be less familiar with what goes on behind the scenes at retailers who are looking to capitalize on the extraordinary human phenomenon of ‘the holidays’.
If you’re lucky enough to be a security leader working at a retail company, the odds are that you approach the next several weeks with more than a bit of fear and apprehension. In all likelihood, the technology department has already initiated its annual “holiday freeze” – the lockdown of systems and new application functionality that provides a stable environment to capitalize on the surge of business.
While Black Friday, Cyber Monday, and the holiday shopping season in general may bump your heart rate up a notch or two, the success of the two-month selling sprint rests increasingly on the shoulders of Information Technology to deliver and Information Security to protect. While everyone else is gorging on turkey, football, and shopping, retail security professionals are the busiest they will be all year.
Given that Black Friday is just over a week from now, I’ve put together a list of 11th-hour tasks that can be knocked out in a short period of time. This list includes observations of some of the best practices of Denim Group’s retail clients over the years and has a distinct web and application focus (which reflects the large body of work that Denim Group has accumulated in these areas). Some of these tasks will probably be “no-brainers” for many; however, I suspect you will find at least a few that you are not focusing on – but should. That said, some tasks, such as measuring the security risk of suppliers, are simply too big to take on this late in the game.
With that in mind, we’ve come up with the following “Top 9” list. Why 9, you might ask? As a tip of the hat to geeks who are Python fans, I’ll invoke and misquote the cleric character played by Monty Python’s Michael Palin in Monty Python and the Holy Grail:
“Then shalt thou count to nine, no more, no less. Nine shall be the number thou shalt count, and the number of the counting shall be nine. Ten shalt thou not count, neither count thou eight, excepting that thou then proceed to nine. Eleven is right out. Once the number nine, being the ninth number, be reached, then lobbest thou thy Holy Hand Grenade of Antioch towards thy foe, who being naughty in My sight, shall snuff it.”
Here are our Top 9! Enjoy…
This list is by no means exhaustive and it will be admittedly difficult to tackle all of these tasks prior to Black Friday. However, we hope that if you’re in retail security, you picked up one or two ideas from the above that you hadn’t thought of and that helps you step up your security game during this busiest of seasons.
Happy Holidays and see you in January!
]]>Over the summer, I had the opportunity to present at the RSA Asia Pacific & Japan Conference on the topic of DevOps and security. In the last 6-12 months, and especially in the time since submitting this topic, we’ve seen the accelerated rise of DevOps. The challenge is that we haven’t solved the problem of software security, and now we’re going a million miles an hour. There’s inherent risk in this fail-fast mentality with regard to security.
The number one credo in the industry today is the push to shorten time to market at the expense of almost everything else. With that in mind, can security remain relevant?
Given this trend to move quicker, the key issues outlined in my presentation included:
For more information, watch my interview with Editor in Chief of RSA Conference Jennifer Lawinski below and view the slide deck from my presentation.
]]>Ahhhhh. BlackHat Eve. That week before Black Hat where overworked security folks all over the world attempt to clear out their email inboxes prior to jetting out to Las Vegas for a week in enclosed conference centers with thousands of other like-minded security nerds. But when we talk about Black Hat as a singular event – a monolithic entity – that is a misnomer. Really, what I’m talking about are the three organized conferences that take place almost simultaneously: Black Hat USA 2016, DefCon 24, and B-Sides at Mandalay Bay, Paris, and Tuscany Suites & Casino respectively. Throw in the countless vendor parties, press events, and good old fashioned meet ups that occur during the week of August 2-6, 2016 in Las Vegas and you have more “stuff” than any normal human can consume. What this week has become is the largest aggregation of security pros, hackers, wannabes and newbies who use the word “cyber” as a standalone noun at their own expense.
So what do we have to look forward to? Aside from a week of dehydration, fallen arches, and inevitable hangovers…
There are a multitude of sessions at the three formal conferences to choose from. How does a reasonable person make a choice of what to hit in a week in Vegas given the limits of time and geography? Bree Fowler, of the Associated Press, posed the question in New York City earlier this June, and I had no real answer. What follows is my feeble dissection of a list that is too big to curate. What are likely to be the tasty sessions based purely on the pre-conference hype and well-written conference abstracts? What will likely play out next week at one of the largest security conferences in the world? Here we go!
Tasty Sessions
Yes, picking cool sessions is largely a hit-or-miss activity based upon pre-conference buzz and appealing abstracts. As next week draws closer, the realities of time, space, and geography kick in, and some serious choices on what to attend and what not to attend come into play. As a hardened security guy, I offer this unscientific list of what I want to see. I hope that one or two might be worth penciling into your itinerary too.
Dan Kaminsky, The Hidden Architecture of Our Time: Why This Internet Worked, How We Could Lose It, and the Role Hackers Play, August 3, 9:00 – 10:00 am.
Dan Kaminsky’s keynote is likely a top 5 “can’t miss” session for the week. He might even have one or two surprises up his sleeve – he usually does. The world is changing, and the Internet needs to change with it too. Dan will tackle the role of government in this change. No doubt big-picture stuff, but that’s what we need to start Black Hat off on a strong note.
Bryant Zadegan and Ryan Lester, Abusing Bleeding Edge Web Standards for AppSec Glory, August 3, 10:20 – 11:10 am.
Web applications remain a primary attack vector despite the fact that they have been so for nearly a decade, according to analysts like Gartner. Given how fast organizations are moving to implement DevOps, application security will become even trickier. The latest on how to play appsec whack-a-mole should be interesting, and Bryant and Ryan are really smart guys.
Zinaida Benenson, Exploiting Curiosity and Context: How to Make People Click on a Dangerous Link Despite Their Security Awareness, August 3, 11:30 am – 12:20 pm.
Phishing remains a top attack vector, targeting layer 8 (humans). I have no doubt that new and unusual ways to dupe users will be revealed in this session. Although this is a well-trodden area, phishing seems to evolve and mutate. This session will be well worth hitting to hear details on the latest evil.
Jeff Melrose, Drone Attacks on Industrial Wireless: A New Front in Cyber Security, August 3, 1:50 – 2:40 pm.
Drones – heck yeah! You can win one of the numerous giveaway drones from the expo floor and put it right to work after Black Hat. Seriously, as an ex-Air Force guy, this is right up my alley and will no doubt be a mind bender and a departure from the standard vulnerability talks.
Peleus Uhley, Design Approaches for Security Automation, August 3, 4:20 – 5:10 pm.
I don’t have to tell you that security automation is the way of the world. If you’re a security person stuck in the bowels of bigcompany.com and trying to dance with the DevOps team, this will be worth hitting to up your automation IQ. It is where the world will end up in the not-too-distant future.
Kenneth Geers, Cyber War in Perspective: Analysis from the Crisis in Ukraine, August 3, 5:30 – 6:00 pm.
OK, a deeper analysis of the Russian (?) attack on the Ukrainian power grid is probably worth hearing. Although the potential for Chicken Little, sky-is-falling buzzword overload might be present, I think the case study of what happened in Ukraine is important for everyone to understand in this age where attacks have morphed from defacements and data loss to out-for-the-count downtime.
Jack Daniel, Hire Ground, August 2, 11:00 – 11:30 am.
Jack Daniel is a security community institution, the heart and soul of B-Sides, and a must-meet if you haven’t already. This session is likely going to be a great way to kick off B-Sides once you make it to the Tuscany Suites. One burning question for B-SidesLV 2016 – can Jack outdo his all-denim suit from last year?
Unfortunately, at the same time is another one:
Wendy Nather and Dean Webb, Network Access Control: The Company-Wide Team Building Exercise That Only You Know About, August 2, 11:00 – 11:30 am.
Wendy is another security community institution – a former CISO and industry analyst, and currently the Research Director at the Retail Cyber Intelligence Sharing Center (R-CISC). Wendy is a great speaker – I like the topic, but that’s almost inconsequential, as I’d recommend attending a Wendy session regardless of the topic. Key question for 2016 – what color of hair will Wendy have this year?
Chris Eng and Wendy Everette, Security Vulnerabilities, the Current State of Consumer Protection Law, & how IOT Might Change It, August 2, 2:30 – 3:00 pm.
A meaty topic that touches IoT and consumer protection law – unfortunately uncharted territory for government, regulatory agencies, and the security industry. I’ve been on the speakers’ circuit with Veracode veterans Chris Eng and Chris Wysopal for some time, and have no doubt Chris Eng will push us to think about the coming privacy concerns that IoT will represent for all of us as consumers.
Andrew Morris, Flaying out the Blockchain Ledger for Fun, Profit, and Hip Hop, August 2, 2:00 – 2:55 pm.
And
Rod Soto & Joseph Zadeh, No Silver Bullet. Multi contextual threat detection via Machine Learning, August 3, 10:35 – 11:30 am.
Blockchains and how they might be used to build trust models and secure things is a hot topic in security circles. Machine learning is no different and is a potential game changer for the industry, making this session worthy of attendance. If you can’t make these, make sure to catch at least one other on blockchains and machine learning because they will likely have a huge effect on what we do.
Matteo Beccaro and Matteo Collura, (Ab)using Smart Cities: The Dark Age of Modern Mobility, August 4, 1:00 pm.
With everything connected, the doomsday scenario of shutting down a city becomes less and less science fiction and more and more someone’s problem to solve. This session will either get you thinking or make you buy that small home in the country, off the grid. Should be fun.
Evan Booth, Jittery MacGyver: Lessons Learned from Building a Bionic Hand out of a Coffee Maker, August 6, 11:00 am.
From a pure curiosity standpoint, this session might be worth attending. Either way, you’ll never look at that office coffee maker the same way again.
Fred Bret-Mounet, All Your Solar Panels Are Belong to Me, August 6, 4:30 pm.
Oh my! Last year it was guns, this year solar arrays.
I can’t even begin to think of the many bad things that can happen from someone taking over an entire solar array, but I guess we’re going to find out. This will bring an entirely new take on renewable energies – you can now renew your root access credentials conveniently, courtesy of the manufacturer.
As you get a sense, there are hundreds of great sessions next week. None of us will do justice to all of them, but perhaps between physical attendance and social media we won’t miss the Jeep hacking equivalent of 2016. Good luck. We’ll see you out there next week. @johnbdickson.
]]>
If you haven’t seen it yet, Gartner just published its “Hype Cycle for Application Security, 2016,” written by Gartner Analyst Ayal Tirosh with support from colleague Lawrence Pingree (Gartner clients can view it at https://www.gartner.com/doc/3376617/hype-cycle-application-security-). This is potentially a deeply important step for the application security market because it provides clarity around a set of emerging ideas involving application vulnerabilities that buyers, vendors, and analysts had previously struggled to define. I’ll first lay out what Gartner did, and then I’ll explain why it’s so important. (In the interest of full disclosure, Denim Group’s ThreadFix vulnerability resolution platform is one of the technologies mentioned in the report.)
The process for adopting new technology areas is anything but straightforward, so let me put it into proper perspective. Sometimes technology sector names are developed by savvy product marketing managers looking to separate their product from previous technologies and enjoy what’s called “first mover” status. Think of “next generation firewalls” versus “firewalls” (you don’t want to be caught with your pants down with just a plain old firewall when attackers come). Other times industry analysts such as Gartner will come up with a term after listening to a stream of vendor pitches and struggling to characterize an emerging technology that they think is different from what they’ve seen in the past. That appears to be the case in this instance.
In a section in Gartner’s 2016 Hype Cycle Report, Ayal and Lawrence characterize Application Vulnerability Correlation, or AVC, as a technology “on the rise.” They define AVC as “application security workflow and process management tools that aim to streamline SDLC application vulnerability remediation by incorporating findings from a variety of security-testing data sources into a centralized tool.” Put another way, AVC tools accelerate the remediation of vulnerable apps by fully automating the flow of app vulnerabilities between testing tools, centralized application security functions, and the many development teams that actually fix security defects. By automating what now remains an all too manual process, AVC tools enable application security teams to have higher level risk discussions with their development colleagues, which in turn will allow the dev teams to focus on the few most critical vulnerabilities at the expense of the many less critical ones. This workflow automation is even more important with increasing adoption of approaches such as DevOps, Continuous Integration (CI), and Continuous Deployment (CD). Without it, development teams are slowed by security best practices and vulnerabilities persist.
Gartner also listed AVC for the first time on the actual Hype Cycle at the “Innovation Trigger” stage. The Hype Cycle helps bring Gartner clients up to speed on various new technologies, and the Innovation Trigger describes that stage as “A potential technology breakthrough kicks things off. Early proof-of-concept stories and media interest trigger significant publicity. Often no usable products exist and commercial viability is unproven.” That’s a pretty conservative definition, and I would beg to differ on the last sentence. (For a good background explanation of the Gartner Hype Cycle and its definitions, visit http://www.gartner.com/technology/research/methodologies/hype-cycle.jsp). Put simply, products in the Innovation Trigger have caught the eye of the analyst and they are worthy of mention to Gartner clients.
There are several reasons why this is important, even if it’s not readily apparent to most. These reasons include:
Settling on a common term (Application Vulnerability Correlation) provides a common language between buyers and sellers that drives more efficient adoption of new technologies. Market confusion, on the other hand, adds friction as buyers and sellers attempt to grapple with terms that are not agreed upon (even today, debate still exists around the terms “application security” versus “software security”). Typically developed to address a particular “pain point” that exists in our fast-moving industry, new security products share a common problem – how do you characterize the technology, and what do you call it? What do you call the collection of products that protect desktops from attack (endpoint security), or what do you call firewalls that have a certain additional set of capabilities (next generation firewalls)? The best names use the fewest syllables – NAC, which stands for network access control, is my favorite – and are widely adopted and understood.
Finding new products becomes more efficient. As has been widely documented and discussed in the vendor community, buyers are spending more and more time learning about products online prior to engaging vendors. A lack of common terminology hurts an evolving industry most acutely in the Internet search arena. Is the fixing of a vulnerable application post-scan “application vulnerability management” or “application vulnerability resolution”? Regardless of what we want to call it, buyers are using every term under the sun to describe the area, yet are not purchasing from vendors that have products in this area. As a vendor, we look very closely at which Google searches are actually occurring in and around the application security market. We confirmed that no consistent set of search terms has been used over the last several years. Buyers aren’t typing in “application vulnerability management” to find ThreadFix; they are typing in “ThreadFix,” which tells us they didn’t actually find the product via search but, in fact, knew of it prior to searching. With Gartner naming this space, buyers will be able to find qualified sellers more efficiently.
New terminology defines what a technology area is not. In the case of AVC, it states unequivocally that it involves applications – not network vulnerability management, not patch management, nor anything else outside the application arena. It’s all about the vulnerable applications and how organizations can use multiple technologies (a Gartner recommendation) to get better application testing coverage and to fix vulnerable applications faster. This is particularly important for CISOs or CSOs without a strong application development or security background trying to distinguish AVC from technologies that, although they might share a common term (vulnerability management), could not be more different.
The common denominator of all the reasons listed above is efficiency and helping define the emerging and fast-moving market that is application security. Perhaps more fundamentally important, though, by naming AVC and putting it on its Hype Cycle for the first time, Gartner will make it harder for its clients to ignore post-scan remediation or to scan only a subset of their application portfolio. It has been our observation that organizations have become far better at identifying application vulnerabilities than fixing them, and we sense this is beginning to change. This small step from Gartner will help swing the focus away from pure vulnerability capture and direct more resources and brainpower toward protecting and fixing what most agree is our weakest spot – applications.
]]>
Now that the dust has settled on the annual 2016 Gartner Security and Privacy Symposium, we can look back through a clean lens and identify themes that bubbled to the surface of the different sessions. Although a critical mass of security leaders were in attendance, many were not. It is my hope that those who were not able to attend this year’s Gartner conference will be able to glean a few key trends that came out in this year’s proceedings.
Some background is warranted for those who have never been to Gartner… Within security circles, “Gartner,” as it is simply known, has become one of the three largest security conferences in North America, a solid third behind the annual Black Hat USA and RSA Conferences. In the run-up to Gartner, I found a great overview of security conferences written by TechBeacon:
http://techbeacon.com/top-information-security-conferences-2016
Surprisingly, TechBeacon was nice enough to include quotes from a 2014 blog post describing my first Gartner Symposium experience. It struck my 2014 self how different Gartner was from the other major security conferences, namely RSA and Black Hat. As you might imagine, Gartner was, and continues to be, far more corporate – way more blue blazers than Black Hat and RSA combined – and has a different general demographic. The conference also focused much less on zero days and threat intelligence, which, I must say, was a relief.
What sets Gartner apart from the other conferences is its almost singular focus on the enterprise, with less emphasis on what one might call the “attack side” of the business. There is no shortage of industry buzzwords, but RSA, and now Black Hat, are no better in that department. Even if we are reluctant to admit it, Gartner analysts influence the way we think about the industry and are pretty good at characterizing emerging security problems. In that regard, the 2016 Symposium did not disappoint.
Denim Group is a Gartner analyst relations client, so I regularly talk to leading industry analysts like Neil MacDonald, Lawrence Pingree, and Ayal Tirosh. For me, it’s a once-a-year opportunity to share insights in person with the analysts who cover your technology space – in my case, application security.
The challenge at Gartner, as at many conferences, is picking the sessions to attend. I wasn’t able to attend every session, but I had a full agenda and covered in person what I could, catching highlights of other sessions via Twitter (following the hashtag #gartnersec).
As I suspected, many could not attend in person, so I took copious notes and reviewed the symposium Twitter stream upon my return home. Gartner’s big statement was that 60% of digital businesses will suffer major service failures due to the inability of security teams to manage digital risk. Many of the thoughts in various sessions flowed from the idea that one must be prepared for the inevitable.
What follows are some of the key takeaways that jumped out at me during the four days I spent at Gartner.
Separating the IoT Hype from Reality: As expected, IoT was a consistent theme at Gartner this year. In his session titled “Practical Steps to Manage Risk and Security in the Internet of Things,” Gartner analyst Earl Perkins kicked off the conference by drawing a parallel between IoT and the operational technologies already deployed in many corporate environments. Gartner uses “Operational Technology” (or “OT”) as an umbrella term covering control systems, SCADA, and other types of sensors. In theory, if you can understand how OT works, you can better understand and prepare for IoT. With that in mind, three things regarding the security of OT, and by extension IoT, stood out to me as either new or particularly interesting.
First, the security models for operational technology and IoT are radically different from the enterprise security model. Built and managed by engineers for resilience and uptime, OT and IoT systems are focused on safety and availability, but are not really built to accept regular security updates like patches.
Second, the platforms, protocols, and vendors for operational technologies are all different and new to security operators – be advised, a learning curve exists for most career security professionals. Seek out the one or two engineers in your organization who understand industrial control and learn everything you can from them.
Finally, IoT will have privacy issues like you’ve never imagined. Understand the features and functionality of IoT devices connecting to your enterprise so you understand the privacy impact. Know even more if your company builds and sells IoT devices. On a side note, Earl has a webinar on July 5th titled “Practical Steps to Manage Risk and Security in the Internet of Things” if you are interested in the topic: http://www.gartner.com/webinar/3337817?srcId=1-4554397745
Application Security is Still Mostly Improvisation. As an application security guy, I was keenly interested in Gartner’s update at this year’s Symposium. In general, the major themes were a continuation of prior years. During his “2016 State of Application Security” session, Gartner analyst Ramon Krikken updated attendees on what clients ask him and, to a lesser degree, on the trends he observes in the vendor community. The overall theme of Ramon’s session was that clients still have not “solved” the application security problem. As a matter of fact, they are still asking the most basic of questions, including “how do we find and reduce the security vulnerabilities in large numbers of internal and external apps?” Ramon also mentioned that he consistently receives the question “how do I make appsec less of a burden on development?”
From those two basic questions, it was obvious that vendors and end clients are making incremental improvements in addressing application security, but are by no means prepared for a CI/CD/Agile world where the speed of delivery will greatly increase.
Four observations that stood out for me in Ramon’s session included:
The “train-test-fix” application security model won’t scale for DevOps. Agreed, and this worries lots of application security veterans, including myself. Do we throw out everything we know and, like Etsy and Netflix, wait until vulnerable applications make it into production and tear them down after the fact? Good question…
Developers should build secure code, not security code. Architect systems so that security checks are external to business logic and built by security experts (see the sketch after this list). I like the concept, but I’m not sure that’s the biggest problem on the ground, where many companies still don’t have 100% testing coverage of their applications.
Future-state application security will be standardized, externalized, and automated. Gartner has argued that promising technologies such as Runtime Application Self-Protection (RASP) and Interactive Application Security Testing (IAST) will enable organizations to address the application security problem with more automation. We agree, but the rapidly evolving landscape of application development languages and frameworks makes any silver-bullet technology elusive.
Adaptive security architecture and blockchains will redefine trust for digital businesses. Blockchains are no longer just about Bitcoin! Gartner views authentication and authorization on a sliding scale, given context and other factors. Blockchains will be incorporated into new trust models to help organizations interact with third parties at different trust levels.
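To make the second observation concrete – developers writing secure business code while security experts own the security code – here is a minimal sketch in Python of externalizing an authorization check from business logic. The decorator, permission name, and user structure are hypothetical, invented for illustration rather than taken from Ramon’s session.

from functools import wraps

# Hypothetical security layer, owned and maintained by security experts.
def requires_permission(permission):
    def decorator(func):
        @wraps(func)
        def wrapper(user, *args, **kwargs):
            # The authorization check lives outside the business logic.
            if permission not in user.get("permissions", []):
                raise PermissionError("user lacks '%s'" % permission)
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

# Business logic written by developers contains no security code.
@requires_permission("transfer_funds")
def transfer_funds(user, amount, destination):
    return "sent $%s to %s" % (amount, destination)

alice = {"permissions": ["transfer_funds"]}
print(transfer_funds(alice, 100, "savings"))  # allowed; a user without the permission raises

The point of the pattern is that the business function stays readable to developers while the security team can evolve the check in one place.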
DevOps and Security
The last area of interest to me was Gartner’s refresh on everyone’s favorite “other” buzzword – DevOps. Senior Gartner analyst Neil MacDonald delivered a presentation on what he has coined “DevSecOps,” the mashup of DevOps and security. He also released a “Gartner Top 10 Technologies for Information Security” (http://www.gartner.com/smarterwithgartner/gartners-top-10-technologies-for-information-security) during the Symposium. Neil warned that security leaders should not lose the battle of perceptions by being a road bump on the path to DevOps progress. He predicted that by 2020, more than 90% of enterprise DevOps initiatives will have incorporated security controls, up from less than 10% in 2015. That seems like a no-brainer, but part of me wonders more broadly what percentage of companies will actually have made the jump to DevOps by 2020, let alone what percentage will have incorporated security controls.
Of interest, Neil released a Gartner survey of 134 IT and security leaders that found 41% of IT operations staff believed security policies and teams were slowing IT down. Surprisingly, roughly 37% of their security counterparts felt the same way about security policies and their own teams! I was quietly relieved that Gartner didn’t single out CIOs for this survey – I simply didn’t want to know that 100% of CIOs felt that security policies and teams were slowing them down. In addition to these numbers, several other key takeaways stood out from the Symposium:
DevOps mistakes create the most common vulnerabilities. According to Neil MacDonald, the most common DevOps-related security vulnerabilities will come from mistakes – misconfigurations and mismanagement. That makes sense – you can now scale your mistakes in a once-unimaginable way! I think this points to the complexity of certain DevOps functions and the need for DevOps expertise before you step up your DevOps game.
Use application security tools geared for rapid turnaround and high-fidelity results. This is where I agree in concept but, in practice, have the most doubt. Enterprise clients still struggle with coverage issues – both automated testing coverage and coverage of their entire application portfolios. Although RASP and IAST hold promise, I’m still not sure there’s the “Easy Button” here that both clients and analysts yearn for.
If infrastructure is becoming code, then secure coding principles apply to the templates, scripts, recipes, and blueprints that drive configuration. One of Neil MacDonald’s last key points was that application security must scale through the proliferation of secure templates, scripts, and recipes that drive configuration. He’s right, but here’s where automation falls short. I’d argue that what we’re discussing here is analogous to custom business logic and complex authorization rules – something that a smart appsec person needs to design up front. If you have an automation-centric view of solving the appsec problem, this area could be problematic.
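As a toy illustration of the “infrastructure as code” point, here is a sketch in Python that lints a hypothetical firewall template for a classic misconfiguration – an administrative port exposed to the world. The template structure, rule names, and port list are assumptions made for illustration, not any particular vendor’s format.

# A hypothetical firewall template, as it might look after parsing a
# configuration file; the structure and rule names are illustrative only.
template = {
    "rules": [
        {"name": "web", "port": 443, "source": "0.0.0.0/0"},
        {"name": "ssh", "port": 22, "source": "0.0.0.0/0"},    # the mistake
        {"name": "db", "port": 5432, "source": "10.0.0.0/8"},
    ]
}

ADMIN_PORTS = {22, 3389, 5432}  # ports that should never be world-reachable

def lint_template(template):
    """Flag rules that expose administrative ports to the entire internet."""
    findings = []
    for rule in template["rules"]:
        if rule["port"] in ADMIN_PORTS and rule["source"] == "0.0.0.0/0":
            findings.append("rule '{0}' exposes port {1} to the internet"
                            .format(rule["name"], rule["port"]))
    return findings

for finding in lint_template(template):
    print(finding)

A check like this can run in the pipeline on every template change – which is exactly where the “scaled mistakes” MacDonald described would otherwise slip through.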
To wrap up, the 2016 Gartner Security and Privacy Summit did not disappoint. There was much to absorb, and many sessions I wish I had attended. I’m still analyzing much of the post-Gartner analysis and chatter, and am more than willing to pass on additional perspective if you’re interested. Finally, if you are a Gartner client, I can email or DM you the actual session links with presentation decks. Just email me at john at denimgroup.com or direct message me on Twitter (@johnbdickson), and I’ll be glad to send you a link with more background on the sessions themselves.
Other Gartner recaps have made it onto the web. For their respective observations, visit:
Symantec’s Gartner Recap (http://www.symantec.com/connect/blogs/gartner-security-risk-management-summit-62016-recap?es_p=2078309)
Tenable Network Security Gartner Recap (https://www.tenable.com/blog/security-in-the-digital-age)
]]>We ran a webinar for the upcoming ThreadFix 2.4 Enterprise release. Slides and a video recording of the webinar are available here:
There were a couple of items that came up during the presentation where I wanted to provide some additional detail and links to resources:
There were also a couple of questions that were asked, but that we accidentally didn’t answer because of a mixup with the chat system. To follow up on those:
Thanks to everyone who attended and participated in the discussion.
]]>I recently gave a presentation at the TEDx San Antonio conference, held March 5th, 2016, at Rackspace Global Headquarters. This was a tremendous experience, and I got to meet and share ideas with a bunch of great folks. Here’s a video of the talk:
And here’s an interview I did with Jennifer Navarrete afterward where I got to expand on some of the topics from the talk:
This was probably the most challenging presentation I’ve given to date for a number of reasons:
Starting out, I knew I wanted to speak in terms that I felt would resonate with the audience. So being “technically correct” – which is the best kind of correct for us pedantic technology people – had to take a back seat to using words that would immediately resonate. I didn’t have the time to slog through a lot of definitions, and, more important, I didn’t figure the audience really cared. So I used the dreaded word “cyber” in the talk title because the audience knew what it meant. Or at least they thought they did, and it got them thinking in the right direction. I tried to talk about “coders” rather than “software developers,” with the hope that it would get the idea across while sounding less formal and having fewer syllables. I made a couple of passes over the text of my talk to remove jargon and replace it with more natural language, even at a loss of precision and accuracy.
As mentioned above, communicating anything in six minutes is a challenge for me. I had to cut a lot of corners and gloss over a lot of details that I actually think are important. BUT I did have six minutes, so I wanted to make the best of it.
It seems silly to bring this point up, but I wanted to start at a place everyone was familiar with and would agree with, and build my argument from there. If I were talking about something everyone knew about and agreed on, I would probably have started with some sort of snarky, controversial statement. But I was talking about application security, which I assume basically no one in the audience knew about, so I wanted to start on familiar ground. Because “TED” stands for “Technology, Entertainment, Design,” I didn’t figure this would be terribly controversial; instead, it would be a way to get everyone on the same page.
I think it is also important for folks to understand just how pervasive technology has become. Again – this is a bit of a well-accepted truism at this point, but is important to set the stage for the rest of the talk. After covering this ground hopefully everyone will at least agree that this talk has some bearing on their lives.
I thought it was really important to expand the scope of “security” beyond just a discussion about financial data. Credit card breaches are an easy-to-understand phenomenon for a layperson, but the impact on the individual is typically not that bad. I also thought it was really important to highlight the fact that not all security breaches are recoverable – for example, the case where medical information is disclosed. If I had had more time, I would probably have talked through some scenarios about how everyone’s Fitbits were trying to kill them – or at least how they could try if directed by some malicious hacker. But there just wasn’t time.
Along those same lines, a week or so after my talk, I ran into someone at lunch who recognized that I was wearing the same t-shirt I had worn to TEDx. I wasn’t terribly surprised that I was wearing the same shirt, because I own about five shirts that I just rotate on a daily basis, and they’re all CrossFit- or GoRuck-themed. I was, however, surprised that someone recognized me out wandering around in the “real world” after my talk. He asked me, “Why didn’t you talk about the FBI trying to break the iPhone encryption?”
There were a variety of reasons I didn’t. First of all, the Apple/FBI news broke well after the talk had been pretty well “baked in,” but also, I had really limited time. If application security was a topic that was all but impossible to cram into six minutes, opening the can of worms around an encryption debate was a non-starter. But I liked that I got that question – kinda because someone recognized me from my TEDx talk, which now means I’m famous, but also because he linked the questions about breaking iPhone encryption with the ideas of cybersecurity and secure systems in my talk.
Hopefully people who watch the talk will walk away with a better sense of how the security of the technologies they use can impact their lives – seeing that security isn’t just about financial info, that it isn’t just something banks and hospitals have to worry about, but instead is something that all organizations and individuals need to at least consider.
To get folks thinking a bit more deeply about the technologies pervasive in their lives I went on to talk about how software really forms the underpinnings of all the cool technology innovations these days. Hardware is something that is very … tangible. So, it is easy to think of technology as racks of servers connected by miles of cables. But these days that is really just a bunch of plumbing. As I said in the talk, even components thought of as “hardware” are also running software, and anything really valuable and cool you do with technology has the bulk of the heavy lifting and innovation done by software.
So, if the coders are the ones who are really building all the cool technologies that people get to use, that means they are the ones who have to make sure those technologies are built to be secure. This required a bit of hand-waving and proof-by-assertion because the last thing I wanted to do was launch into some sort of formal proof. But for people familiar with how systems are built, this is something they should think more about.
I’m proud to have attended Trinity University, and I was especially proud to have one of my professors, Dr. Paul Myers, in the audience at my TEDx talk. Hopefully, he didn’t take offense at my comment about my education being “reassuringly expensive” or all the tricks we used to play on professors. In addition to a great liberal arts education, Trinity University provided me with a top-tier vocational education to be a professional computer programmer. And the lack of security in my curriculum wasn’t a surprise – I think most universities have a lot of trouble teaching computer security topics. A lot of professors don’t have a strong background in computer security, and those that do are often focused on over-the-horizon research or crypto stuff rather than the more practical concerns that industry practitioners are focused on.
Unfortunately, that means we have created an “installed base” of professional programmers who have insufficient knowledge of secure design and development concepts. The people building the software we rely on often don’t know the most basic of security concepts when they are released into the wild to start developing software. It should come as no surprise that the software these folks release is riddled with security weaknesses and vulnerabilities.
I’ve had the opportunity to speak to a number of undergraduate computer science courses about security and those experiences informed this portion of the talk. Time after time when I’ve talked to students, I’ve found that they’re interested in security, but just don’t have sufficient context to tackle a lot of the topics that “industry” employers will ultimately need them to comprehend. Some of these issues are technical: How do you talk about SQL injection to a student who has never taken a database course? How do you instill an understanding of cross-site scripting (XSS) in a student who has never built a web application? But other issues go beyond the technology. How do you convince a student to care about PCI compliance? How do you get a student to care about HIPAA? The sad truth is that you usually can’t.
That’s what I tried to communicate with my vignette about SQL injection and PCI compliance. Students just don’t care about a lot of things that information security professionals care about. At least, they don’t yet at the education stage of their careers. And the common “person on the street” doesn’t either.
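For readers who, like most of that audience, have never seen SQL injection in action, here is a minimal sketch using Python’s built-in sqlite3 module. The table and data are invented for illustration; nothing here comes from the talk itself.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '4111-1111-1111-1111')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: concatenating user input into the query returns every row,
# even though no user is actually named "nobody" - alice's card leaks.
query = "SELECT * FROM users WHERE name = '" + attacker_input + "'"
print(conn.execute(query).fetchall())

# Fixed: a parameterized query treats the input as data, not as SQL,
# so the same attack string matches nothing.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (attacker_input,))
print(rows.fetchall())  # []

Of course, walking a TEDx audience through even this much code would have lost the room – which is rather the point of the vignette.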
One of the great aspects of preparing for TEDx talks is that you have to do one or more “curation” sessions where you give early versions of your talk for the other speakers as well as for your “curator” – a kind of handler who makes sure that you’re ready to give your talk. (By the way – many thanks to my curator Hart Hoover. Thanks for staying on me and making sure I was ready!)
As I mentioned above, I think there were concerns among the folks running the TEDx conference that a talk about cybersecurity would be too technical, too arcane, and not of interest to the attendees. When I launched into my spiel about databases and SQL injection, I could see from the body language of the organizers at the curation session that they were getting a bit nervous. When I started talking about “PCI-DSS,” I could see a couple literally cringe in the back of the room. And that was exactly my point – the average person doesn’t care about security like an information security professional does. Not even close.
Fortunately, the joke worked in the curation session so I kept it in for the actual talk. It got a laugh, which, at the end of the day, was all I really wanted. And hopefully that helped to illustrate the futility of how we all too often try to communicate security concepts to students.
Framing the discussion as one of compliance or one of cryptography is destined to alienate far too large a percentage of those exposed to it. But if we can look at security through the lens of misusing software, and frame the challenge as making systems resistant to misuse, then I think we have a better chance to pique students’ interest and inspire them to dig deeper. My hope was that this was a simple enough concept for laypersons to take with them.
I feel like the talk ended abruptly. Probably because it did. Just like this paragraph.
I spent a lot of time building up my argument, but because of the background of the attendees I had to spend the vast majority of the time laying out the landscape. Given more time, I would have loved to talk more about the economic drivers and other incentives shaping how organizations and individuals build software, but:
As it ended up, I did manage to sneak in one final thought that I hope people take with them – in addition to getting coders to ask questions about security, I also want people to go forth in their lives and start asking those questions of the companies providing them with technologies: “What have you done to make sure this technology only does what it is supposed to do?”
Security is all about incentives, and customers have a unique ability to create incentives for the companies they buy from. If the market demands security then companies will do a better job of delivering.
I suppose that you, the reader/viewer, will be the ultimate judge. Personally, I was happy with how it turned out. I did receive some feedback I liked such as:
Feedback like that was really encouraging. But that was from people who are inclined to be nice to me.
Not everyone agrees. I communicated via Google Plus with Dan Borges who said that he “fundamentally disagree(s)” and that I was “promoting misinformation” by suggesting that coders had a high degree of responsibility for the security of today’s technologies. He highlights the human factors that go into many security breaches and my lack of discussion about defense in depth and response times.
Those are fair criticisms of the argument I laid out, but, yet again, I’ll cop out and blame my time constraints. Also, I’m not sure anyone would have shown up for a talk titled “Cybersecurity: The Coders Have a Role to Play, But There’s Other Stuff to Worry About, Too.”
Information security is obviously far too broad a concept to boil down into a single presentation, and even more so when the intended audience probably hasn’t thought much about the topic.
TEDx talks are supposed to be about “ideas worth spreading” and my hope is that folks left my talk with a new perspective on the security of the technologies they use, and a bit more curiosity about the people who actually build those technologies. If I accomplished that, then I’ll consider the endeavor a success.
]]>I recently had the opportunity to speak with Zachary Fryer-Biggs of IHS Jane’s at RSA 2016 about the DoD’s expansion into Silicon Valley and its attempt to tap new, innovative technology solutions. Zachary’s recent article, titled “Defense in Silicon Valley,” looks at the cultural change the DoD is attempting and its focus on making it easier for companies to do business with the Pentagon. While the concept is sound, you have to put it into the proper context of how the government works.
The new Silicon Valley outpost, dubbed the Defense Innovation Unit Experimental (DIUx), is the DoD’s venture aimed at integrating innovation and creative thinking into the Pentagon and repositioning the U.S. military as a technological powerhouse. Unfortunately, despite the push to change the culture and be more responsive to technological innovation, I fear this approach may be too little, too late. While I like the idea of the DIUx, it is still part of the government and subject to certain rules and regulations. These rules and regulations tend to handcuff innovation rather than foster it.
Take the Federal Acquisition Regulation (FAR), for example. Its rules make working with the government more difficult, and if you don’t happen to be classified as an 8(a) company, you are even further inhibited from successfully working with the government. This, combined with the need for security clearances, creates a real barrier to entry, which also leads to a serious lack of IT talent within the government. This combination of forces creates a perfect storm that can drive smaller companies on the cutting edge of innovation to choose not to work with the Pentagon. We sadly came to this conclusion over a year ago – not because we don’t want to work with the government, but because it is too difficult and cost-prohibitive to do so.
I will be attentively watching the progress of the DIUx and wish them the best of luck. Hopefully it will not create the same hurdles and drawbacks of working with traditional government organizations. If there is ever such a thing as a cyber war, we will find ourselves on the losing end unless we can successfully lower the barriers of entry that have been keeping the government behind the innovation 8-ball. The approach of the newly formed DIUx is definitely a step in the right direction, but only time will tell if it will be successful. I just hope we have the time to find out.
]]>I recently had the opportunity to speak with Zachary Fryer-Biggs of IHS Jane’s at RSA 2016 on the DoD’s expansion into Silicon Valley and its attempt to tap new innovative technology solutions. Zachary’s recent article titled “Defense in Silicon Valley” takes a look at the cultural change the DoD is attempting to adopt and its focus on making it easier for companies to do business with the Pentagon. While the concept is sound, you have to put the concept into proper context of how the government works.
The new Silicon Valley outpost, dubbed Defense Innovation Unit Experimental (DIUx), is the DoD’s venture at integrating innovation and creative thinking into the Pentagon and repositioning the U.S. Military as a technological powerhouse. Unfortunately, despite the focus to change the culture and be more responsive to technological innovations, I feel this approach may be too little too late. While I like the idea of the DIUx, they are still a part of the government and subject to certain rules and regulations. These rules and regulations tend to handcuff innovation rather than foster it.
Take the Federal Acquisition Regulation (FAR) for example. Its rules make working with the government more difficult, and if you don’t happen to be classified as an 8A company, you are even further inhibited to successfully working with the government. This combined with the need for security clearances creates a real barrier for entry, which also leads to a serious lack of IT talent within the government. This combination of forces creates a perfect storm that can drive smaller companies on the cutting edge of innovation to choose not to work with the Pentagon. We sadly came to this conclusion over a year ago. Not because we don’t want to work with the government, but because it is too difficult and cost prohibitive to do so.
I will be attentively watching the progress of the DIUx and wish them the best of luck. Hopefully it will not create the same hurdles and drawbacks of working with traditional government organizations. If there is ever such a thing as a cyber war, we will find ourselves on the losing end unless we can successfully lower the barriers of entry that have been keeping the government behind the innovation 8-ball. The approach of the newly formed DIUx is definitely a step in the right direction, but only time will tell if it will be successful. I just hope we have the time to find out.
]]>I recently had the opportunity to speak with Zachary Fryer-Biggs of IHS Jane’s at RSA 2016 on the DoD’s expansion into Silicon Valley and its attempt to tap new innovative technology solutions. Zachary’s recent article titled “Defense in Silicon Valley” takes a look at the cultural change the DoD is attempting to adopt and its focus on making it easier for companies to do business with the Pentagon. While the concept is sound, you have to put the concept into proper context of how the government works.
The new Silicon Valley outpost, dubbed Defense Innovation Unit Experimental (DIUx), is the DoD’s venture at integrating innovation and creative thinking into the Pentagon and repositioning the U.S. Military as a technological powerhouse. Unfortunately, despite the focus to change the culture and be more responsive to technological innovations, I feel this approach may be too little too late. While I like the idea of the DIUx, they are still a part of the government and subject to certain rules and regulations. These rules and regulations tend to handcuff innovation rather than foster it.
Take the Federal Acquisition Regulation (FAR) for example. Its rules make working with the government more difficult, and if you don’t happen to be classified as an 8A company, you are even further inhibited to successfully working with the government. This combined with the need for security clearances creates a real barrier for entry, which also leads to a serious lack of IT talent within the government. This combination of forces creates a perfect storm that can drive smaller companies on the cutting edge of innovation to choose not to work with the Pentagon. We sadly came to this conclusion over a year ago. Not because we don’t want to work with the government, but because it is too difficult and cost prohibitive to do so.
I will be attentively watching the progress of the DIUx and wish them the best of luck. Hopefully it will not create the same hurdles and drawbacks of working with traditional government organizations. If there is ever such a thing as a cyber war, we will find ourselves on the losing end unless we can successfully lower the barriers of entry that have been keeping the government behind the innovation 8-ball. The approach of the newly formed DIUx is definitely a step in the right direction, but only time will tell if it will be successful. I just hope we have the time to find out.
]]>I recently had the opportunity to speak with Zachary Fryer-Biggs of IHS Jane’s at RSA 2016 on the DoD’s expansion into Silicon Valley and its attempt to tap new innovative technology solutions. Zachary’s recent article titled “Defense in Silicon Valley” takes a look at the cultural change the DoD is attempting to adopt and its focus on making it easier for companies to do business with the Pentagon. While the concept is sound, you have to put the concept into proper context of how the government works.
The new Silicon Valley outpost, dubbed Defense Innovation Unit Experimental (DIUx), is the DoD’s venture at integrating innovation and creative thinking into the Pentagon and repositioning the U.S. Military as a technological powerhouse. Unfortunately, despite the focus to change the culture and be more responsive to technological innovations, I feel this approach may be too little too late. While I like the idea of the DIUx, they are still a part of the government and subject to certain rules and regulations. These rules and regulations tend to handcuff innovation rather than foster it.
Take the Federal Acquisition Regulation (FAR) for example. Its rules make working with the government more difficult, and if you don’t happen to be classified as an 8A company, you are even further inhibited to successfully working with the government. This combined with the need for security clearances creates a real barrier for entry, which also leads to a serious lack of IT talent within the government. This combination of forces creates a perfect storm that can drive smaller companies on the cutting edge of innovation to choose not to work with the Pentagon. We sadly came to this conclusion over a year ago. Not because we don’t want to work with the government, but because it is too difficult and cost prohibitive to do so.
I will be watching the progress of the DIUx attentively and wish them the best of luck. Hopefully it will not recreate the hurdles and drawbacks of working with traditional government organizations. If there is ever such a thing as a cyber war, we will find ourselves on the losing end unless we can lower the barriers to entry that have kept the government behind the innovation 8-ball. The newly formed DIUx is definitely a step in the right direction, but only time will tell whether it will be successful. I just hope we have the time to find out.
Many organizations use ThreadFix as the platform for running their application security programs – tracking their application portfolios and getting their applications under a cycle of regular security testing. But before you can start getting applications under security management, you have to know about them and get them loaded into the system. In this post, we look at some technical means to discover web applications running in your environment and get them loaded into ThreadFix.
As you look to structure and scale your application security program, a critical thing to understand is your organization’s software attack surface. This refers to the set of all software you expose to the world that could be attacked. That’s potentially a lot of software – check out slides 8 – 26 for more background – so to keep yourself sane (and to have any chance of success) you need to draw some lines around what you consider to be in scope and out of scope for your application security program. In addition, you can’t defend attack surface that you don’t know about, so discovery of these application assets is a critical part of the process of standing up a successful application security program. In this blog post, we will look at some technical means for identifying web applications on your network, and how to get those applications under management in ThreadFix.
Web applications are hosted on – wait for it – web servers. And web servers tend to listen on a common set of ports. So – if you know the IP ranges where you are likely to be hosting applications, you can scan those networks and ports to identify likely web applications. Now – just because you’ve identified a web server doesn’t mean you’ve found an individual web application – a web server can host multiple applications or a web app may be hosted by a series of web servers. So once you’ve discovered a web server you will need to do some investigation to get to a final list of actual web applications. But a list of web servers is a great starting point. Once you know about web servers, you can also get them loaded into ThreadFix so that you can start tracking the results of application security testing.
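For a quick manual look before scripting anything, a plain nmap run against a candidate range will surface hosts listening on common web ports (the range and port list here are just placeholders):

nmap -p 80,443,8080,8443 --open 10.0.0.0/24

The script described below automates this same scan and feeds the results into ThreadFix.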
First – I want to introduce a new GitHub repository we’ve turned on: threadfix-examples. This is basically just a home for various utilities and examples of how you can do different things with ThreadFix. The first thing we’ve pushed up there is an initial version of a Python script to identify potential web servers and, optionally, load that data into ThreadFix.
To use the script, you have to install a couple of dependencies. The first of those is the nmap network discovery tool. You can find instructions on downloading and installing nmap for your environment here.
Also, on my OS X laptop where the script was originally developed, I’m running Python 2.7.10 and needed a couple of Python modules. These can be installed by running:
pip install python-nmap
pip install threadfix_api
So what does the script do? It scans a target network for hosts listening on common web ports, treats each responsive host and port as a candidate web application, and can optionally load those candidates into ThreadFix via its API.
Here’s a sketch of the script (the full version lives in the threadfix-examples repository):
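This is a minimal sketch of that flow, assuming the python-nmap and threadfix_api modules installed above. The ThreadFix URL, API key, team name, and network range are illustrative placeholders, and the create_team/create_application calls follow the threadfix_api package’s documented usage – treat this as a starting point rather than the finished script:

import nmap
from threadfix_api import threadfix

THREADFIX_URL = 'https://threadfix.example.com/threadfix'  # placeholder
API_KEY = 'YOUR_API_KEY'                                   # placeholder
WEB_PORTS = '80,443,8080,8443'                             # common web ports

def discover_and_load(team_name, network):
    # Scan the network segment for hosts listening on common web ports
    scanner = nmap.PortScanner()
    scanner.scan(hosts=network, ports=WEB_PORTS)

    candidates = []
    for host in scanner.all_hosts():
        if 'tcp' not in scanner[host]:
            continue
        for port, info in scanner[host]['tcp'].items():
            if info['state'] == 'open':
                candidates.append((host, port))
                print('Potential web server: %s:%s' % (host, port))

    # Optionally load each candidate into ThreadFix as an application
    # under a team named for this network segment
    tf = threadfix.ThreadFixAPI(THREADFIX_URL, API_KEY)
    team = tf.create_team(team_name)
    if not team.success:
        print('Could not create team: %s' % team.message)
        return
    for host, port in candidates:
        scheme = 'https' if port in (443, 8443) else 'http'
        url = '%s://%s:%s/' % (scheme, host, port)
        # Name each application after its IP and port; rename later once
        # you know what actually lives there
        tf.create_application(team.data['id'], '%s:%s' % (host, port), url)

if __name__ == '__main__':
    discover_and_load('Discovered - 10.0.0.0/24', '10.0.0.0/24')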
When run on a test network, the script prints out the hosts and ports it identifies as likely web servers, and those entries then show up in ThreadFix as a team with its list of applications.
A couple of things to note:
First – identify the data centers where you would expect to be hosting web applications. And while you’re at it make sure it is cool for you to be running nmap on those networks. Then you can run this script on each network segment – probably with a different ThreadFix “team” name for each. That will get you your list of web servers broken down by network. Once you have all of the data loaded in, you will need to do some discovery to figure out what the actual list of applications looks like based on the list of web servers and clean up the list in ThreadFix. This will probably entail renaming applications to something more sensible – versus the IP and port – and probably renaming the ThreadFix “teams” and moving applications between them. So you aren’t done when you run the script, but it should give you a solid base from which to work.
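Building on the sketch above, a hypothetical per-segment driver might look like this – the segment names and ranges are made up:

# One ThreadFix "team" per data center network segment; rename teams and
# applications in ThreadFix once you know what actually lives where
SEGMENTS = {
    'DC1 Web Tier': '10.1.0.0/24',
    'DC2 Web Tier': '10.2.0.0/24',
}
for team_name, network in SEGMENTS.items():
    discover_and_load(team_name, network)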
Contact us for help identifying the applications in your organization’s software attack surface.
Starting an application security program can be very challenging. If you don’t know how to get started – or if you can’t seem to get any traction getting your organization to change its ways – consider changing your focus and instead beat up on your vendors.
Creating an internal application security program is hard. You have to get development and IT operations teams to change the way they work and shoulder additional burdens. This will likely cost them time they didn’t allocate and money they didn’t budget. And the lines of business these teams support are often laser-focused on developing new features and capabilities to delight customers and compete in an increasingly cutthroat business environment. Organizations that take their eye off the ball with regard to competition and innovation are destined to die off – quickly. Failing to properly address security concerns might kill you – someday. The immediacy of competition and innovation requirements, and the remoteness of the pain from security failures, lead many organizations to view application security as a nice-to-do, good-hygiene activity rather than an imperative.
Security teams are great at understanding risk, but they don’t tend to make money for their organizations. Instead, they are often perceived as the “department of ‘no’” – holding up key initiatives – and as a cost center. In a perfect world this wouldn’t be the case – security teams would be universally perceived as a trusted resource to rely on for sage advice about managing risk while enabling business in an uncertain and dangerous world. And in leading organizations this is the case. However – look at your security budget and think about how many times you have met with your CEO in the past year, and you may have an indicator of how “leading” your organization is when it comes to security and risk management.
So – if your security team isn’t at the “pointy end of the spear” in your organization, what can you do to change the landscape? One possible answer: beat up your vendors.
It is often easier to get vendors to comply with your security requests than internal teams because of the Golden Rule. And the Golden Rule is – for our purposes today – the group with the gold makes the rules. So let’s see how that works.
As discussed above, when you are negotiating with internal development teams and lines of business, security is perceived as a cost center and an impediment to accomplishing forward progress. To bend these teams to your will you have to use all of your guile and tricks but you will always be fighting an uphill battle. John Dickson’s whitepaper “Turning the Tide” provides strategies for fighting – and winning – this battle. But battles are hard.
When you start requiring vendors to meet security requirements, the balance of power shifts, and you enlist your vendors’ sales force as your allies. If procurements are being held up pending development and security teams meeting documented requirements, their salespeople – whose commissions are being held up – become your best enforcers. Rather than starting your application security journey by undertaking a major internal process improvement initiative, you can start improving the security of your application portfolio by changing a few lines in your procurement contracts.
First of all, this unfortunately will probably not help you with your most critical applications at the outset. In many organizations, the riskiest applications holding the most sensitive data are developed in-house. Let’s not discount the value of the accounting, ERP, and industry-specific applications you have deployed, but realize that your custom software is probably responsible for the majority of your risk, and this approach won’t start helping you address that just yet. But at least it is a way to get started and a way to introduce the concept of applications putting your enterprise at risk.
Also, this works far better when onboarding new vendors because they’re looking for a sale and are inclined to make concessions to close the deal. Expect this approach to work less well with existing vendors because in those cases you already have a defined relationship and changing the terms is much harder.
Karma has a way of enforcing “what goes around comes around” (some who provided feedback on this post also suggested that “stuff rolls down hill”), so you should expect that this same request will be levied on you by your customers. Look forward to it! It will provide you with more ammunition to accelerate your efforts to improve the security of the code you are developing. All of a sudden your company’s sales force will be on your side in promoting secure software development. Because when the security of software is holding up sales, software security suddenly becomes a “must do” rather than a “nice to do.”
In a perfect world, organizations would be more open to voluntarily evolving their practices to develop more secure software. Unfortunately, this often isn’t the case and in those situations, a focus on vendor and supply-chain software security can be a way to jump start progress, improving the security of at least some portion of your application portfolio.
Contact us for help keeping your vendors accountable for software security.
ThreadFix is currently optimized to help with vulnerability management – importing vulnerability data from various sources, performing triage on the imported vulnerabilities, and then communicating the triaged vulnerabilities to the tools that developers use for resolution. Some organizations have also been using ThreadFix to help track their threat modeling programs. By using some of ThreadFix’s capabilities in a slightly different way it is possible to centralize both threat and vulnerability tracking inside of ThreadFix.
Most of the organizations we work with using ThreadFix to track threats and threat models are using some variant of the Microsoft-style of threat modeling that relies on Yourdon-DeMarco data flow diagrams and the STRIDE threat classification taxonomy. Some of them are even using Microsoft’s Threat Modeling Tool. Other less formal efforts often use whiteboards and text documents to capture the assets, data flows, and identified threats.
The goals for these teams are typically to keep threat models stored alongside the rest of an application’s security data, to track individual threats through to mitigation the same way vulnerabilities are tracked, and to communicate those threats to development teams in the tools they already use.
The remainder of this article will look at how using the existing capabilities of ThreadFix can support these aims.
To begin, we assume that the team has conducted their threat modeling exercise for an application, either resulting in an informal model:
Or a more formal one:
To track these raw threat models, teams can use ThreadFix’s ability to store files with an application. Analysts can upload these files from the Files tab on the Application page:
Image files can be viewed in the browser, and the files can also be downloaded for later use and editing. This isn’t fancy, but it allows teams to store and track threat models alongside all of the vulnerability data being managed inside of ThreadFix. Teams can refer back to the model and upload new ones as the application evolves.
In addition to tracking the raw results of threat modeling activities, individual threats can be tracked in ThreadFix, assigned to development teams, and then the developers’ progress addressing these threats with mitigations can be tracked. This allows security teams to maintain lists of threats alongside vulnerabilities and keep tabs on the current security state of systems in development as well as in maintenance.
As mentioned before, ThreadFix is currently optimized for tracking vulnerabilities, so to track threats we have to characterize them as vulnerabilities so we have a place to store them. This is a little goofy but not a completely unreasonable thing to do. Using ThreadFix’s ability to track manually identified findings from penetration tests usually provides the best fit. Threats can be entered like this:
This requires the threat to be associated with a CWE type, and something must be entered in either the URL or parameter field. Once that is done, the entry is tracked alongside the results of any vulnerability testing that might have been done against the system.
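For teams that would rather script this entry than click through the UI, here is a hypothetical sketch against the ThreadFix REST API. The addFinding endpoint, its parameter names, and the IDs below are assumptions based on the ThreadFix 2.x API documentation – verify them against your version’s docs before relying on this:

# Hypothetical sketch only: endpoint and parameter names are assumptions
# drawn from ThreadFix 2.x REST API docs, not verified here.
import requests

THREADFIX_URL = 'https://threadfix.example.com/threadfix'  # placeholder
API_KEY = 'YOUR_API_KEY'                                   # placeholder
APP_ID = 42                                                # placeholder application id

payload = {
    'apiKey': API_KEY,
    'vulnType': 'Improper Authentication',  # CWE name the threat maps to
    'longDescription': 'THREAT: attacker spoofs the identity of the batch '
                       'upload process. Recommended mitigation: mutual TLS.',
    'severity': 4,  # assumed 1-5 severity scale
    'fullUrl': 'https://app.example.com/batch/upload',  # satisfies the URL field
}

# Submit the threat as a manual finding so it is tracked like a vulnerability
response = requests.post(
    '%s/rest/applications/%d/addFinding' % (THREADFIX_URL, APP_ID),
    data=payload)
print(response.json())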
To help highlight these items as threats, ThreadFix’s vulnerability tagging can be used. This allows analysts to filter and report specifically on items that have been created to track threats. Creating a “threat” tag is handled on the tag management page (Customize -> Tags):
Then this “threat” tag can be applied to the items created as threats on the Vulnerability Details page:
After tagging these entries as threats, they can then be viewed on the main application page:
And they can be reported on via the vulnerability tag filter capability:
Now that individual threats have been entered in the system, they can be communicated to development teams via their defect tracking systems. This assumes that a defect tracker has been configured in ThreadFix and associated with the application for which we are tracking threats. The online defect tracker documentation outlines how to accomplish this.
To communicate the threat and recommended countermeasures to the development team, select it on the Application page and use the Action menu to Create Defect:
Then you can select the appropriate Issue Type for the issue tracker, and the form will be updated with the appropriate metadata from the defect trackers. This is an opportunity for security and development to create a custom template in the defect tracker that is specific to addressing issues identified during threat modeling. Based on the requirements of the selected issue type, the analyst can provide the appropriate data and submit the defect to the defect tracking system:
Now the threat and recommended countermeasures exist in the development team’s issue tracking system:
Development teams can then work through whatever process they have for assigning tasks to individual developers. When the task is marked completed in the defect tracking system, ThreadFix will update the status of the threat so security analysts can check the developer’s work and manually mark the threat as closed when they are satisfied with the outcome:
This approach provides a single repository for threats and vulnerabilities, visibility into the status of mitigations as developers work on them, and traceability from each identified threat through to its resolution.
So there you have it – tracking threat models and threats inside of ThreadFix alongside vulnerabilities found by various types of testing. There are a couple of hoops to jump through, but it gives teams a “single pane of glass” and traceability throughout the threat identification and mitigation lifecycle.
Contact us for help unifying your threat and vulnerability tracking.
I had the opportunity to lead a Peer2Peer session at RSA 2016 that asked attendees to talk about how they do vulnerability management for different types of vulnerabilities. In particular, what I wanted to discuss were the similarities and differences in how organizations deal with network and infrastructure vulnerabilities versus application-level vulnerabilities.
We had a capacity crowd at the session, and a couple of folks actually couldn’t make it into the room because we ran out of seats. So popular! The group was pretty diverse with industries ranging from high-tech, healthcare, government, and retail and a variety of company sizes ranging from well-funded startups to some of the world’s largest companies. This was great because we got to hear from folks with a wide-ranging set of perspectives.
As expected, most organizations had their network and infrastructure testing pretty well squared away, and many said that their scanning program basically ran continuously – finishing up coverage of their IP range and then starting over again immediately. Identified vulnerabilities get communicated to some sort of operations team and, in general, most issues can be addressed. Application testing programs tended to be a lot less mature. Testing was not done as frequently, and the resolution pathway for identified vulnerabilities wasn’t as well-defined. In a handful of organizations, application vulnerabilities are being pushed to developer defect trackers. I view this as a “table stakes” requirement to start making headway in an application vulnerability management program. But overall, the testing programs and vulnerability management protocols weren’t as mature for applications as for infrastructure and network vulnerabilities. It was also more common to see centralized server operations teams, whereas application teams were often segmented off in different business units, making communication with those teams more challenging.
One point that came up early in the discussion was that organizations hosting and managing their own applications had materially different vulnerability management workflows than organizations that shipped products for end-users to install. This is because in-house application vulnerabilities are “fixed” once you’ve deployed the updated code and configuration in your own environment, whereas vulnerabilities in products provided to end-users aren’t really “fixed” until your end-users have deployed the updates and fixes you’ve provided. So vulnerability management for product vendors also includes additional steps: notifying end-users of vulnerabilities and available updates, and providing support to get those updates deployed in end-user environments.
One participant related a number of interesting points about how their organization had made effective use of a public bug bounty program. They conceded that they received a lot of junk submissions, but, overall, found the quality of submissions was better than what they saw from automated scanning tools (“At least with a bug bounty submission, a human sent it in hoping that they’d get paid for it.”) Another piece of advice on using bug bounties was to triage the results and pay bug bounties quickly. This helps to maintain a good relationship with submitters and makes your program more attractive when compared to other available bug bounty programs. The participant did indicate that their bug bounty program was really only possible because they had a “giant website” and didn’t think they would have experienced the same success if they had a significant need to test internal-facing, partner-facing, or non-web applications. (I might suggest that public-facing mobile applications could benefit from bug bounty programs as well.)
A common lamentation from the group was that there were situations where: (1) identified vulnerabilities could not be fixed within a reasonable timeframe, or (2) the affected systems could not be patched or changed at all.
The folks suffering from situation (2) seemed to be centered in the healthcare and government sectors, and everyone had problems with (1). The typical approach in these situations was to get “the business” to sign off on those vulnerabilities that can’t be fixed, to help share the responsibility for the risk being accepted. There was some great discussion on how to best talk about this risk. I think that we, as a profession, need to do a better job of knowing what to say and, possibly more important, knowing what not to say when communicating about risk with management and executives.
As always, I think the best way to apply this knowledge is to take one or two points that resonate with a challenge you’re facing in your job, and try them out. It might be trying out a bug bounty program. Or it could be working to get a formal risk-acceptance signoff process put in place for vulnerabilities that are exceptionally hard to patch or fix. Whatever it is – go out and make a change. The real value of the RSA conference in general, and the Peer2Peer sessions specifically, is the opportunity to hear from peers and not-quite-peers, see what they’ve found to be effective in their environments, and try those things out in your own. Do it!
I had a great time at this Peer2Peer session and I want to thank the folks at the RSA security conference for giving me the opportunity to facilitate. Even more so I want to thank the folks who participated in the session. We had great participation and a variety of opinions and experiences that were shared.
Contact us for help getting your vulnerability management program on track.
Originally published on DevOps.com
Creating a software security initiative in any organization is no easy feat. Oftentimes, organizational culture or politics can provide development managers with a strong counterargument against implementing software security concepts. Unfortunately, building software without consideration for security has become a less viable option, given increasing compliance pressures and widely publicized data breaches that have led software consumers to expect more security from developers.
There is no way around the fact that attacks have become increasingly sophisticated and applications have become their central avenue. As a result, organizations are increasingly confronted with the task of assuring secure, defect-free software development. Leaders within these organizations are finally coming to realize that vulnerable, defect-ridden software undermines the productivity, privacy, and security of their businesses, employees, and consumers alike.
As organizations begin to accept that there is a need to test for best practices and defects throughout the development lifecycle, it is important that they are provided with the resources to make this happen. To change the way large organizations build software, an enterprise-wide initiative is required. At its core, an initiative to change the way an organization tests and validates its development processes and practices is a fundamental business process improvement effort.
Typically, large organizations with unique operational requirements find themselves building custom software systems to address their specific needs. But building custom software on time, on budget, and without bugs is difficult. Building software that also complies with an organization’s software security policies presents an even more difficult challenge. Although the tools and practices to build secure software are maturing, most organizations find internal hurdles to be more daunting. Organizational impediments including culture, differing software development approaches, and short-term business drivers make it more difficult to effect meaningful change. In order for a software security initiative to be successful, organizations must take a phased approach that considers organizational culture at each step.
Sure, there is the option of treating the symptoms by deploying web application firewalls or running an automated scanner against applications, but for most organizations this does not get to the root cause and solve the problem. Rather, it highlights that there must be deeper process improvements in the systems development life cycle, particularly for higher-level business logic or authorization vulnerabilities.
Despite the hurdles that exist when approaching the issue of secure development, most organizations do realize there is a problem that needs to be addressed. Organizations focus on technical means to write more secure code and strategies for putting controls around the software. The next step is to educate executives on the process of leading a software security initiative, as these initiatives are most likely to fail due to organizational issues, not technical ones. This means taking a disciplined approach: characterizing the landscape, securing champions, defining standards and strategy, executing, and then sustaining the effort. These steps, tailored to the way an organization operates, will help ensure that corporate-wide efforts to secure applications are as productive as possible.
So, in conclusion, it is not an impossible feat to address security at its root cause – software development – as part of standard operating procedure. It just takes a little forward thinking to demonstrate its undeniable value and get the corporate buy-in needed to change the development culture. That said, once the culture is changed, don’t rest on your laurels. Always observe what’s occurring within your organization to determine whether new technology risk areas are emerging. If you focus only on well-trodden areas like web applications, you may find yourself behind the 8-ball once again when it comes to new advances in software development, like mobile and the Internet of Things.
Originally published on DevOps.com
In today’s fast-paced environment, security often plays second fiddle to deadlines. That means security doesn’t typically get considered during software development; rather, it’s the innovations that can be quickly implemented that take center stage. Unfortunately, ranking short-term tactical gain over long-term vision is undeniably flawed. Doing so ignores the fact that attacks are more sophisticated than ever before, and applications with security holes have become a key focal point for attackers, because once a vulnerable application is discovered it becomes easier to compromise a wide set of users at once. But change requires more than drastic measures. It requires an unconventional strategy to adjust culture and behaviors throughout the organization.
Although the tools and practices to build secure, defect-tested software are maturing, most organizations find internal hurdles to be more daunting. Organizational impediments including process, differing software development approaches, and short-term business drivers – like the need to update apps to reflect the latest must-have features and platform support – make it more difficult to effect meaningful change. Those interested in leading secure development initiatives within their organizations face myriad challenges.
Step out of your comfort zone to influence culture, process and strategy change
Often, folks who find themselves in the position to step up to the plate and change culture did not initially plan to do so. The developer who is chosen to lead the charge and establish a process that ensures applications are securely built is extraordinarily smart and devoutly technical. In order to begin changing the world, he or she will need to utilize approaches outside of their comfort zone. Success will rely on the acknowledgement that a full frontal assault on the status quo will not do. Instead, hearts and minds must be won. The skill sets that propel developers’ careers forward must take a back seat to the forces of leadership and persuasion, not coercion, which will ultimately affect the cultural shift in an organization’s application development mindset.
To effectively change the way organizations build software, an enterprise-wide initiative is required – one that accounts for organizational culture at each step. By taking a step-by-step approach, implementing a successful software security initiative does not have to be so daunting. Five proven best practices include taking a disciplined approach by characterizing the landscape, securing champions, defining standards and strategy, executing, and then sustaining the effort. These steps, as outlined below, will help ensure that your corporate-wide efforts to secure applications are as productive as possible.
Characterize the Landscape – Understand the task ahead and craft a realistic strategy for adoption within your organization. That ranges from identifying your organization’s compliance framework and cultural norms to cataloging existing software security artifacts and the software development lifecycles you have in place. Being able to characterize your existing landscape allows you to fill the gap between policy and practice.
Secure Champions – Focus explicitly on the fact that you will need the clear support of executive sponsors and other key influencers in the organization in order to be successful. While senior leaders may not understand or care about the minutiae of software vulnerabilities, they will appreciate the business impact of a data breach with far-reaching cost, reputation, or legal repercussions.
Define Standards and Strategy – You will only have one opportunity to successfully roll out a secure development initiative, so you cannot overlook this step. Conduct a risk assessment of the applications your organization owns to identify the most vulnerable ones and provide a qualitative ranking for decision-making. Having a baseline set of practices and procedures that are well thought out, realistically implemented, and reflective of what is achievable in your organization is absolutely critical.
Execution – So you’ve done your homework, secured supporters throughout the chain of command and in the field, and laid out your strategy and goals. Now comes the hard part – bringing the issue of software security to the forefront through innovative awareness campaigns. Remember, it will be important to show quick wins, highlight positive behaviors, and do it over and over again; ratcheting up expectations and software security with each iteration.
Sustainment – This follows the successful execution of your software security initiative campaign, which can take between one and two years. To ensure your campaign stays fresh and does not lose momentum, a regular, disciplined update of the regulatory framework must occur. In addition, staying knowledgeable about what is occurring within your organization to determine whether new technology risk areas are emerging is critical.
Creating a software security initiative is difficult by any measure. From organizational culture or politics to the status quo bias towards meeting deadlines for new features and functionality, there is no limit to the amount of hurdles that need to be overcome. But by demonstrating the importance of looking long term from a security standpoint and winning the hearts and minds of everyone that plays a role, success can be achieved. Organizations that focus only on the tasks ahead of them do not change the world. Long term vision is critical for organizations that endeavor to strategically move, grow and change the status quo.
]]>Originally published on DevOps.com
In today’s fast-paced environment, security often plays second fiddle to deadlines. That means software development doesn’t typically get considered when building secure applications, rather it’s the innovations that can be quickly implemented which take center stage. Unfortunately, ranking short term tactical gain over long term vision is undeniably flawed. Doing so ignores the fact that attacks are more sophisticated than ever before, and applications that have security holes have become a key focal point for those attackers because once a vulnerable application is discovered it becomes easier to compromise a wide set of users at once. But change requires more than drastic measures. It requires an unconventional strategy to adjust culture and behaviors throughout the organization.
Although the tools and practices to build secure, defect-tested software are maturing, most organizations find internal hurdles to be more daunting. Organizational impediments including process, differing software development approaches, and short-term business drivers – like the need to update apps to reflect the latest must-have features and platform support – make it more difficult to effect meaningful change. Those interested in leading secure development initiatives within their organizations face myriad challenges.
Step out of your comfort zone to influence culture, process and strategy change
Often, folks who find themselves in the position to step up to the plate and change culture did not initially plan to do so. The developer who is chosen to lead the charge and establish a process that ensures applications are securely built is extraordinarily smart and devoutly technical. In order to begin changing the world, he or she will need to utilize approaches outside of their comfort zone. Success will rely on the acknowledgement that a full frontal assault on the status quo will not do. Instead, hearts and minds must be won. The skill sets that propel developers’ careers forward must take a back seat to the forces of leadership and persuasion, not coercion, which will ultimately affect the cultural shift in an organization’s application development mindset.
To effectively change the way organizations build software, an enterprise-wide initiative is required – one that accounts for organizational culture at each step. By taking a step-by-step approach, implementing a successful software security initiative does not have to be so daunting. A disciplined approach rests on five proven best practices: characterizing the landscape, securing champions, defining standards and strategy, executing, and then sustaining the effort. These steps, as outlined below, will help ensure that your corporate-wide efforts to secure applications are as productive as possible.
Characterize the Landscape – Understand the task ahead and craft a realistic strategy for adoption within your organization. This ranges from identifying your organization’s compliance framework and cultural norms to cataloging existing software security artifacts and the software development lifecycles you have in place. Being able to characterize your existing landscape allows you to close the gap between policy and practice.
Secure Champions – Recognize that you will need the clear support of executive sponsors and other key influencers in the organization in order to be successful. While senior leaders may not understand or care about the minutiae of software vulnerabilities, they will appreciate the business impact of a data breach with far-reaching cost, reputation, or legal repercussions.
Define Standards and Strategy – You will only have one opportunity to successfully roll out a secure development initiative, so you cannot overlook this step. Conduct a risk assessment of the applications your organization owns to identify the most vulnerable ones and provide a qualitative ranking for decision-making. Having a baseline set of practices and procedures that are well thought out and realistic to implement in your organization is absolutely critical.
Execution – So you’ve done your homework, secured supporters throughout the chain of command and in the field, and laid out your strategy and goals. Now comes the hard part – bringing the issue of software security to the forefront through innovative awareness campaigns. Remember, it will be important to show quick wins, highlight positive behaviors, and do it over and over again, ratcheting up expectations and software security maturity with each iteration.
Sustainment – This follows the successful execution of your software security initiative campaign, which can take between one and two years. To ensure your campaign stays fresh and does not lose momentum, a regular, disciplined update of the regulatory framework must occur. In addition, it is critical to stay aware of what is occurring within your organization and to determine whether new technology risk areas are emerging.
Creating a software security initiative is difficult by any measure. From organizational culture and politics to the status quo bias towards meeting deadlines for new features and functionality, there is no limit to the number of hurdles that need to be overcome. But by demonstrating the importance of taking the long view on security and winning the hearts and minds of everyone who plays a role, success can be achieved. Organizations that focus only on the tasks immediately ahead of them do not change the world. Long-term vision is critical for organizations that endeavor to strategically move, grow, and change the status quo.
Coming off the annual Cybersecurity Month in October and having the opportunity to recently speak at CyberMaryland, I’m all “cyber’ed” out. At least I’m painfully aware when it’s used in casual conversation, and I even wince when I use the term “cybersecurity” to describe what I do to the vast unwashed masses. What’s becoming increasingly obvious is that we need a new word for cyber. I want to actively debate this and find an alternative before “cyber” (an adjective, or noun) becomes a verb, the way “Google” became “googling.” I never want to hear that a client was “cyber’ed” by a nation-state threat, or that someone “cyberfied” their network to make it more resilient to attack. That bleak prospect is so gravely serious that we need to put tongue firmly in cheek and start talking….
As Alcoholics Anonymous and other recovery groups state, admitting you have a problem is the first step towards recovery. Yes, we have a problem. I’ve known this for some time. This fact was driven home to me earlier in the year when a non-security guy stated emphatically, “John, you know it’s not just about cyber, right? It’s about cyber, big data, and cloud?” My initial response was to suggest he add mobile and DevOps so that he would have every buzzword in IT covered. But after my first, and possibly snarkier, response trailed off, I decided serious discourse about the use of the word “cyber” was needed.
By background, I’ve been a security guy for nearly 20 years. That’s how I self-identify, and that’s how people know me. Like Johnny Appleseed, I dispense solicited advice at cocktail parties, family reunions, and my daughter’s soccer games. I answer questions that range from smartphone security, to when to update one’s Windows box, to how best to select hard-to-crack passwords. So I’m on the frontline, like all of us who read Dark Reading. It’s in our best interest to land on a better term before someone finds a worse one to describe our industry and what we do. To that end, I humbly submit the following observations and suggestions for further discussion.
Let .gov and .mil guys keep “cyber”
They are comfortable with the term, they use it in conversation without wincing, and they would likely be a willing adoptive parent. There is also the practical matter that the term is baked into so much government code, signage, and doctrine that a simple name change would cost taxpayers billions. In the military, “cyber” has been adopted to mean all things that don’t blow up bad guys. Fighter pilots, infantry officers, and naval officers may not understand what it is, but they do know it might prevent them from getting shot at. One request, though: stop using the term “cyber warfighter.” As an Air Force Information Warfare Center alumnus, I’ve never been quite comfortable with it. The folks who have actually been shot at might not be able to stomach the term, and you might get your nose punched by a Navy SEAL if you’re in a bar talking about how you DDoS’ed someone.
Don’t reuse stale terms!
If cyber does a poor job of describing what we do, certainly older, well-trodden names are no better. Information security, or InfoSec for short, seems hopelessly stuck in the ’90s. It might have worked then, when the scope was purely the security of information, but not now. Related terms, like information protection and network security, are similarly dated and too narrow in scope.
The least worst current option – cybersecurity
An acceptable compromise, and one that seems to strike a happy medium, is the term many use today: “cybersecurity.” Don’t worry about whether it’s one word, two, or hyphenated; it has “cyber” in it for the Feds and “security” in it for most of the commercial types. You can say cybersecurity to a mixed audience and not get groans or eye rolls from the more grizzled security veterans. As a stopgap measure, cybersecurity works.
In a perfect world – just security
Here’s where I’ve arrived. I call it “security” – no need to further describe or elaborate. I self-identify as a “security guy.” I help clients with security services and products. Given the constant stream of front-page stories, security (read: cybersecurity) has become so mainstream that I don’t have to clarify, or distinguish myself from our physical security brethren. No guns, gates, or guards for me, and no, I’m not a mall cop. So I’m a security professional, providing security services that keep clients out of the news.
No matter what we end up calling it, we need to make sure that those who live and breathe security are the ones who dictate the term that is used. The art of what we do as IT security professionals has evolved into a sophisticated and critical part of everyday culture, not just business. We need to own what we do and come up with a term we can be proud to associate with our work; not one that makes us cringe every time we hear it.
Originally written for Dark Reading
Background
During static analysis, one of the things the application security team checks for is strong random number generation in security-sensitive contexts. We quite often see weaknesses in this space around temporary passwords and session identifiers, but an increasingly common variant involves universally unique identifiers (UUIDs).
The proposed UUID standard describes a UUID as:
“…128 bits long, and can guarantee uniqueness across space and time. UUIDs were originally used in the Apollo Network Computing System and later in the Open Software Foundation’s (OSF) Distributed Computing Environment (DCE), and then in Microsoft Windows platforms.”
RFC 4122
They used them on Apollo. (That’s Apollo Computer’s Network Computing System, sadly, not the moon missions.)
However, in a security context these values are not necessarily “guaranteed unique.” Because the identifiers have a finite size, it is possible for two entities to generate the same identifier. The generation process, or algorithm, must be selected to make such a collision sufficiently improbable in practice.
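To put a number on “sufficiently improbable” (assuming a version-4 UUID, which carries 122 random bits), the birthday approximation gives the collision probability among n generated identifiers as roughly

p(n) ≈ 1 − e^(−n^2 / 2^123)

so even a billion UUIDs (n ≈ 2^30) put the odds of a single collision on the order of 10^−19 – provided the bits are genuinely random. A weak generator voids that math entirely.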
Implementation
The generation process typically involves random number generation. RFC 4122, which defines the UUID standard, recommends using a cryptographic-grade random number generator for the purposes of generating UUIDs (RFC 4122, p. 14). Using a statistical pseudo-random number generator (PRNG) instead introduces the problems described below.
Cryptographically secure PRNGs are designed to be non-reproducible, even if the attacker has knowledge of the algorithm in use. Furthermore, cryptographic PRNGs are designed to maintain much more internal state, often incorporating non-deterministic system parameters and hardware-based random sources.
One area where cryptographic PRNGs are difficult to implement correctly is the client side of web application systems. For example, consider this code from the Apache Cordova library (version 3.8.0):
function UUIDcreatePart(length) {
    var uuidpart = "";
    for (var i = 0; i < length; i++) {
        // Math.random() is a statistical PRNG, not suitable for security-sensitive values
        var uuidchar = parseInt((Math.random() * 256), 10).toString(16);
        if (uuidchar.length == 1) {
            uuidchar = "0" + uuidchar;   // left-pad single hex digits
        }
        uuidpart += uuidchar;
    }
    return uuidpart;
}
Whether or not the above code is immediately exploitable depends on how the UUID is used. However, Math.random() is an example of a potentially insecure statistical pseudo-random number generator (PRNG).
PRNGs generate output that is ostensibly random and unpredictable. However, while their output may satisfy certain basic statistical properties, a skilled attacker can still predict the sequence of generated values given basic knowledge of the algorithm used. Furthermore, statistical PRNGs typically maintain very little internal state, and their output is often seeded from the system date and time.
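By contrast, here is a minimal sketch of a version-4 UUID generator backed by a cryptographically strong source via the Web Crypto API (this assumes an environment that exposes crypto.getRandomValues(), which not every browser or WebView of this era does):

// Minimal sketch: RFC 4122 version-4 UUID built from crypto.getRandomValues().
// Assumes an environment that exposes the Web Crypto API.
function createSecureUUID() {
    var bytes = new Uint8Array(16);
    crypto.getRandomValues(bytes);        // CSPRNG-backed, unlike Math.random()
    bytes[6] = (bytes[6] & 0x0f) | 0x40;  // version field: 4 (random)
    bytes[8] = (bytes[8] & 0x3f) | 0x80;  // variant field: RFC 4122
    var hex = [];
    for (var i = 0; i < bytes.length; i++) {
        hex.push(("0" + bytes[i].toString(16)).slice(-2));
    }
    return hex.slice(0, 4).join("") + "-" + hex.slice(4, 6).join("") + "-" +
           hex.slice(6, 8).join("") + "-" + hex.slice(8, 10).join("") + "-" +
           hex.slice(10).join("");
}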
Conclusions
Under ordinary conditions, the probability of two generated UUIDs colliding is very small. However, if UUIDs are generated improperly, the probability of a collision increases considerably. Unfortunately, neither JavaScript nor Apache Cordova provides a standard way of generating cryptography-grade random values. Therefore, any server-side application that accepts client-generated UUIDs should treat them with caution.
In closing, keep the following guidelines in mind: use a cryptographic-grade random number generator when generating UUIDs in security-sensitive contexts, do not assume UUIDs are guaranteed unique, and be wary of accepting client-generated UUIDs on the server side.
UUID generation and management is going to be a growing space for security researchers and professionals due to the prevalence of UUIDs in Internet of Things (IoT) devices. For an interesting read on improper UUID management, see this article on hackable Google beacons made by Estimote, wherein the company tried to secure its implementation but still wound up vulnerable to denial-of-service and false-flag attacks.
Co-written by William Thornton and David Malloy
The members of the ThreadFix team have often found themselves face-to-face with a fairly universal need across software groups: quick access to running application instances. This need spans developers, support engineers, and quality assurance personnel. Sometimes it calls for the latest and greatest code that developers have been working on; other times, the most recent stable release that is in the hands of customers. Through a container system we built around Docker, referred to internally simply as “ThreadFix + Docker”, fulfilling this need is easier than ever.
Components
This section will provide an overview of the components that make up the ThreadFix + Docker system.
Docker and the Docker Daemon
The largest and most integral piece of this system is the Docker daemon that sits on a remote Ubuntu VM. This process and its associated files comprise the backend of ThreadFix + Docker.
As a brief rundown of our use case, Docker is a tool that allows us to dynamically generate “containers”, or lightweight independent spaces that can coexist on a single running machine. These allow us to host individual ThreadFix instances without the overhead of full-on virtual machines and their associated management burden.
The Docker daemon runs on the host VM and awaits commands for spinning up new application containers. When it receives the proper command, it uses one of the “images” it has access to in order to create a running container based on that configuration. A Docker image is a representation of the environment that a container will have once it is running, and since the image was first generated by walking step-by-step through a configuration file (called a “Dockerfile”) and then saving its last state, the system can spin up ready-to-use containers fairly quickly. We will cover how these images are generated in the “Jenkins Continuous Integration” section later in this article.
Here is a simple example of a Dockerfile.
FROM tomcat:7.0.65-jre7
ADD ./threadfix /usr/local/tomcat/webapps/threadfix
LABEL branch="Dev-QA"
LABEL version="Enterprise"
In the ThreadFix + Docker UI, the available images are displayed along with their creation dates.
The Docker API
Docker provides a robust API available via REST calls. Through a simple configuration change, we expose our Docker daemon’s API on a specific TCP port of our host VM. Two components of ThreadFix + Docker communicate with our running Docker process via this channel.
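For reference, here is a minimal sketch of what that configuration change can look like on an Ubuntu host of this vintage (the port number is conventional but an assumption on our part; a production setup would also want TLS and firewalling in front of it):

# /etc/default/docker (hypothetical excerpt)
# Keep the local Unix socket and also listen on a TCP port for remote REST calls.
DOCKER_OPTS="-H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375"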
Management Shell Script
An interactive shell script communicates with the host VM’s Docker process to create or kill containers. Since the script uses curl to make REST calls against the Docker process, it can be run from a user’s machine and does not have to be executed on the host itself (a sketch of the underlying API calls appears after the list). The script allows you to designate options such as:
The display name of the container (for the AngularJS UI).
The version and git branch of ThreadFix to use (Community or Enterprise, Development or Stable, etc.).
The port on the host VM to expose this application instance.
The specific database files that the ThreadFix instance should use.
The database action to call (“create” or “update”).
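As promised above, here is a stripped-down sketch of the kind of calls the script issues. The create and start endpoints are standard Docker Remote API; the host name, port, and payload values are stand-ins for what the real script assembles from its options:

#!/bin/sh
# Hypothetical values; the real script derives these from the options above.
DOCKER_HOST="http://docker-vm.example.com:2375"

# Create a container from the chosen image; the daemon answers with JSON
# containing the new container's ID.
CONTAINER_ID=$(curl -s -X POST "${DOCKER_HOST}/containers/create" \
    -H "Content-Type: application/json" \
    -d '{"Image": "threadfix/enterprise", "Labels": {"user": "demo"}}' \
    | sed 's/.*"Id" *: *"\([^"]*\)".*/\1/')

# Start the newly created container.
curl -s -X POST "${DOCKER_HOST}/containers/${CONTAINER_ID}/start"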
Thin AngularJS Client
The main component that most ThreadFix + Docker end users interact with is the web UI, which is constructed with AngularJS. This thin front-end client communicates with the host VM’s Docker process directly through GET calls in order to populate information about running containers and available images.
For each running container, there is a link showing the port number on the host VM where the ThreadFix instance is exposed; it takes the user directly to that instance’s ThreadFix homepage. There is also a link to view the container’s application logs, which open in a new tab – handy when Support or QA is trying to find or replicate an issue. Lastly, icons display the current database being used by a container, and a warning icon cautions the user when the container is not built on that version’s most recent image, meaning they are likely working with old code.
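As an illustration of those GET calls, a thin AngularJS controller might populate its container list like this (GET /containers/json is the standard Docker Remote API listing endpoint; the host URL and label names here are assumptions for the example):

// Minimal sketch of the UI's container query. Assumes the Docker API is
// reachable at the URL below and that containers carry the labels shown.
$http.get("http://docker-vm.example.com:2375/containers/json")
    .then(function (response) {
        $scope.containers = response.data.map(function (c) {
            return {
                name: c.Labels["user"],                             // display name
                db: c.Labels["db"],                                 // attached database
                port: c.Ports.length ? c.Ports[0].PublicPort : null // host port link
            };
        });
    });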
Funnily enough, the ThreadFix + Docker Web UI is itself hosted on a running Docker container.
Jenkins Continuous Integration
The last piece of the puzzle is integrating this system with our Jenkins continuous integration jobs. We take advantage of our existing CI jobs, specifically those which build ThreadFix artifacts after code changes and then run unit tests against them to verify the code’s quality. These jobs have been modified to copy the built artifact over to the VM hosting the Docker process, then execute a script to build a new Docker image for a particular version of ThreadFix. This way, when a user spins up a ThreadFix Docker instance, they can be sure they’re getting the latest approved code and that they’re getting it almost instantly.
Behind the Scenes
Now we’ll cover a bit of the process behind ThreadFix + Docker. When a ThreadFix container is spun up via the manager script, the REST call passes in several runtime parameters to configure the container and provide metadata for the UI.
The port number passed into the script maps the exposed ThreadFix application port (8080) from within the container to the specified port on the host VM. This is what allows users to access their instances on different host ports simultaneously.
The version and branch of ThreadFix used (Community or Enterprise, Stable or Development) lets the Docker process know which Docker image to use when spinning up the container. As stated above, our Jenkins jobs ensure that these images are up-to-date.
The database name parameter looks for a similarly named directory in a dedicated database directory on the host VM; if the directory does not exist, it is created. The ThreadFix containers take advantage of these databases by attaching them as “volumes” at a specific directory within the container. In Docker vernacular, a “volume” is a file path on the host machine that is mapped in real time to a specific path within a container. In this case, that container path is the location where the ThreadFix application reads and/or generates its HSQL database files. Now if, say, the power goes out, or you want to restart your container with the newest code, you can spin up a container, attach the same database directory as a volume, and pick up right where you left off with all your data intact.
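As a sketch, the volume attachment rides along in the create call’s HostConfig via the standard Binds field. Both paths below are hypothetical; the container-side path would be wherever ThreadFix keeps its HSQL files:

{
    "Image": "threadfix/enterprise",
    "HostConfig": {
        "Binds": ["/opt/docker-databases/my-team-db:/usr/local/tomcat/threadfix-db"],
        "PortBindings": { "8080/tcp": [{ "HostPort": "9000" }] }
    }
}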
The database action parameter also takes advantage of volumes. If you designate “create” as your database action, ThreadFix + Docker replaces the default jdbc.properties file (which designates ThreadFix’s database configuration) with a file from a dedicated jdbc directory on the host VM called “create.jdbc.properties”. Similarly, “update” uses a file called “update.jdbc.properties”, letting you reuse established data if you’ve designated a database.
Finally, it is important to keep in mind that ThreadFix + Docker does not interface with an independent backend; it communicates with the Docker process directly. To store and retrieve container metadata, we rely on Docker “labels” – key-value pairs that you can designate either at runtime when spinning up a container or in the Dockerfiles that configure your images. These labels are later queried and parsed by the web UI to show information like display name, version, branch, etc.
Here is an excerpt from the manager script showing how the JSON for creating containers is crafted:
# Craft JSON Data for Create Call; the quotes inside the JSON are
# escaped so the shell treats them literally.
json="{\"OpenStdin\": true, \"Image\": \"threadfix/${version}\", \"Tty\": true,
  \"Labels\": {\"user\": \"${name}\", \"db\": \"${database}\", \"dbMethod\": \"${dbMethod}\"},
  \"HostConfig\": {${databaseJson} \"PortBindings\": { \"8080/tcp\": [{ \"HostPort\": \"${port}\" }]},
  \"DnsSearch\": [\"denimgroup.com\"]}}"
Conclusion
That about wraps up the overview of the ThreadFix + Docker system. There are several extra use cases we’ve run into (almost “easter eggs”) that we hope to streamline, such as connecting a remote ThreadFix container to a locally hosted MySQL instance to query a database in real time, or spinning up a background SQL Server build instance to prepare for database provider testing.
As it stands now though, ThreadFix + Docker has significantly decreased the time and effort it takes to access robust and up-to-date ThreadFix instances – from sometimes 10+ minutes for the uninitiated down to about 30 seconds. Whether it’s developing third-party integrations, triaging user issues, tracking down bugs during quality assurance runs, or onboarding new team members, leveraging Docker and other connected technologies has helped us toward accomplishing a crucial goal: making it easier to make ThreadFix better.
I had the unique opportunity last week to participate in a daylong policy discussion titled “A Symposium on Cybersecurity and Privacy: What the Public Sector Can Learn from the Private Sector” hosted by the Texas Tribune. The Texas Tribune is the only member-supported, digital-first, nonpartisan media organization that informs Texans — and engages with them — about public policy, politics, government, and statewide issues. The backdrop for the symposium was the University of Texas at San Antonio, San Antonio’s largest public university and one of the largest collections of undergraduate and graduate programs addressing cybersecurity.
Encouraged by Dr. Romo’s right-hand man at UTSA, Albert Carrisalez, Texas Tribune CEO and Editor-in-Chief Evan Smith decided to host the Tribune’s first event on the policy aspects of cybersecurity and to answer the question: is our state prepared for a cyber attack? Evan reached out to me in early November to get input from someone in the trenches. A wildly talented moderator and interviewer, Evan wanted panel ideas that would provoke discussion and provide key insights into the cybersecurity policy world for participants and audience members alike.
Given my role as Chairman of Cybersecurity San Antonio, I jumped at the chance to help – I very much wanted the Tribune’s first cybersecurity event to be in San Antonio, given our city’s critical mass of DoD and corporate cybersecurity assets and UTSA’s decades-long leadership on the topic within the UT System. I provided Evan some ideas for panel discussions on federal, local, and privacy matters. I also suggested the symposium include a panel of commercial security experts who could characterize some of the bleeding-edge security challenges they encounter.
The symposium itself shaped up nicely, with panels along those lines.
The complete agenda can be found here.
The symposium produced some interesting takeaways.
The panel I was on wrapped up the symposium and focused on what the State of Texas, and other governments, could learn from the commercial sector. The assumption was that many commercial organizations are on the leading edge of cybersecurity practices, given the constant attacks they endure and the sophistication of the attackers. We were lucky to get Vic Diaz from USAA and Paul Williams from Rackspace to represent. Perhaps not so unusual for three San Antonians in the security business, all of us came from the Air Force (as Evan highlighted in the introductions).
Evan jumped right into eliciting responses from the three of us, and several key takeaways emerged from our panel.
All in all, the symposium brought public focus at the state level to important aspects of cybersecurity. Although there’s much chatter at the federal level, there’s been far less dialogue about such matters at the state level. The Texas Tribune did a great job pulling cybersecurity into the center of policy discussions during its first event on the topic. Whether or not the participants sufficiently answered the question (“What the Public Sector Can Learn from the Private Sector”) remains to be seen, but elected and appointed officials in Austin will likely take more notice. In that regard, the Tribune’s event was a success.
In addition, Lynn Brezosky from the San Antonio Express-News wrote a great recap.
Today I delivered a webinar on mobile application security and, specifically, on how the iOS and Android platforms handle security. Slides and audio are online here.
The goal of the webinar was twofold: to lay out the mobile application security landscape and to walk through how the iOS and Android platforms handle security.
The webinar ran a little long because we had a great Q&A session at the end. Please feel free to post any additional questions in the comments and we’ll respond to them. Be sure to download the Secure Mobile Application Development Reference whitepaper, which provides additional details about iOS and Android security.
Also, check out John Dickson’s recent webinar on building a mobile application security program.
Contact us for help securing your mobile application portfolio.