Is My Code Secure?

The following is a transcript of the “Is My Code Secure?” panel - it has been minimally edited for clarity. 

The Secure Coding Misperception

Gaby: There's a significant problem, right? Developers are intelligent individuals. Many of us who manage development teams have people with over 20 years of experience. You would assume they know what they're doing, right? If you were to ask them, would they admit to potential security issues? How concerned should you really be about this problem?

Itzik: From my perspective, developers possess a broad understanding of various aspects, including security at a high level. Many of them can discuss different vulnerabilities or the OWASP Top 10, for example. They probably have an idea about what those are, but the depth of their knowledge is often the issue. It's one thing to know a lot about SQL injection, but there are other types of injections and numerous ways to approach injection issues. If you don't delve deep enough, you're missing a lot.
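
To make that depth point concrete, here is a minimal sketch in Python (using the built-in sqlite3 module and a hypothetical users table) of the same lookup written unsafely with string concatenation and safely with a parameterized query:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: the attacker-controlled value is pasted into the SQL text,
    # so input like "x' OR '1'='1" changes the meaning of the query.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the value is passed separately from the SQL,
    # so the driver always treats it as data, never as SQL syntax.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()
```

The same pattern generalizes to the other injection classes Itzik mentions: keep untrusted data out of the "code" position, whether that code is SQL, a shell command, or a template.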

Another issue is that developers, by nature, are builders. They focus on constructing things, releasing them efficiently, writing clean and efficient code, meeting deadlines, and ensuring a successful market launch. If they're good developers, they excel at these tasks. However, what they often overlook is identifying potential vulnerabilities or "cracks" in their code. It's not part of their standard thought process; they build without thinking about the security aspects. This, in my opinion, is a significant problem. As developers, we need to allocate more time to develop our understanding of how our code can be vulnerable and where those vulnerabilities might exist.

Gaby: Gil, do you have anything to add to that?

Gil: Yeah, I want to share a story. In my previous role supervising 700 engineers and overseeing global product security for an American company, we were responsible for training developers. During one training session for 200 developers, I emphasized the importance of never trusting input into their code, as malicious input can exploit vulnerabilities and take control of the application.

After the session, someone approached me to question the trust issue: why couldn't he trust his own code? Since he had written both the server and the client, he trusted his own code. This was a moment of realization for me, as I explained that a cybercriminal could act on behalf of the client by reverse engineering the application, understanding its protocols, and crafting malicious inputs to attack the server. That understanding marked a shift in his thought process, making him more cautious and thorough in verifying all external inputs, regardless of the source.
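
As a small illustration of that shift, here is a hedged sketch (a hypothetical comment handler, plain Python) where the server re-validates a field even though the "official" client would never send it malformed:

```python
MAX_COMMENT_LENGTH = 2_000

def handle_comment(payload: dict) -> dict:
    # Treat the payload as attacker-controlled, even if it "should" only ever
    # come from our own client: anyone can replay or forge the HTTP request.
    comment = payload.get("comment")
    if not isinstance(comment, str):
        raise ValueError("comment must be a string")
    if not 1 <= len(comment) <= MAX_COMMENT_LENGTH:
        raise ValueError("comment length out of bounds")
    return {"comment": comment.strip()}
```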

When Is the Right Time to Think About Security as a Vendor?

Gaby: You know, I noticed this most during Capture The Flag (CTF) challenges. As we aimed to make them more challenging for developers, even those with 20 years of experience, we found that they struggled because they aren't naturally inclined to think in the way hackers do. We had to lower the difficulty level because, for hackers, it's easy. But for developers, it's a different mindset. So, when do you think it's the right time to start thinking about security? Especially from a vendor's perspective, when do customers typically approach you? Is it at the beginning of a project or when they're more mature? When do you think they should start considering these aspects?

Naor: It essentially depends on the organization itself. Many of our customers are startups, particularly small startups that prioritize security from the outset. Security is a top concern for these security-oriented startups. However, in other sectors, security may not be the primary focus. Often, organizations start considering security when they aim to work with clients requiring SOC 2 compliance or other regulatory standards. That's when they begin contemplating security tools and strategies to meet compliance requirements.

I believe that the earlier an organization starts thinking about security, the better. Just like technical debt in code, delaying consideration of security accumulates security debt and vulnerabilities. It becomes challenging to address all vulnerabilities later on. Identifying and addressing them early in the development lifecycle allows for faster risk mitigation.

Bring Secure Coding To The Table When Writing User Stories

Gaby: Additionally, this can be a cost-effective approach from the beginning. Robbe, you recently discussed the concept of "shift left" and incorporating security into user stories in a video. Could you elaborate on this? It seems crucial to understand how people approach security, and it doesn't necessarily require expensive tools.

Robbe Van Roey: Absolutely. In security, vulnerabilities can be discovered at various stages. Discovering a flaw in production is costly, requiring urgent fixes. If an attacker finds a bug in production, the consequences are even more severe. Identifying a bug during the QA process is better but still incurs time and money. The cheapest way to prevent bugs is to catch them during the design process.

When writing user stories, which already involve sales and business requirements, add security as a stakeholder. Define tasks for security: specify how user input should be handled, what access restrictions apply between tenants, or how files must be stored. This enables security requirements to be integrated into unit tests. If vulnerabilities are still found later, it helps determine whether there were gaps in the design process or whether it was a developer oversight.

This approach is cost-effective as it doesn't necessitate expensive tools. It can be implemented straightforwardly, making it a valuable tool for improving security practices.
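
As one way to picture the unit-test integration Robbe mentions, here is a minimal sketch (a hypothetical in-memory document store) where the requirement "one tenant must never read another tenant's documents" is written directly as a test:

```python
import unittest

class DocumentStore:
    """Hypothetical in-memory store; the tenant ID is part of every lookup key."""

    def __init__(self):
        self._docs = {}

    def save(self, tenant_id: str, doc_id: str, content: str) -> None:
        self._docs[(tenant_id, doc_id)] = content

    def get(self, tenant_id: str, doc_id: str) -> str:
        # A cross-tenant read simply has no matching key and raises KeyError.
        return self._docs[(tenant_id, doc_id)]

class TestTenantIsolation(unittest.TestCase):
    def test_other_tenant_cannot_read_document(self):
        store = DocumentStore()
        store.save("tenant-a", "doc-1", "secret")
        with self.assertRaises(KeyError):
            store.get("tenant-b", "doc-1")

if __name__ == "__main__":
    unittest.main()
```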

Who's Responsible For Ensuring Security In Design?

Gaby: But who takes responsibility for this? Product teams typically focus on features when creating user stories. They might not consider security aspects like failed login attempts or other safeguards. In a smaller company, who is responsible for writing these security-focused user stories and incorporating this mindset into the sprint?

Gil: Yeah, I just want to emphasize that in startups, especially small companies, their primary goal is to secure business. If security measures are perceived as hindering deliveries and acquiring more customers, they are likely to be neglected. When it comes to security, I see it from two perspectives: from the management side, such as product managers or founders in small companies, and from the developers' side.

Experienced developers may already be aware of best practices against common vulnerabilities like SQL injections or XSS. However, without someone advocating for more security awareness, it might be overlooked. The absence of a security focus during development can lead to detrimental outcomes, ranging from website takedowns to customer data breaches.
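
For the XSS side of that, a minimal Python sketch (standard html module, hypothetical rendering helper): escape user-supplied text before placing it into HTML so a script tag arrives as inert text rather than executable markup.

```python
import html

def render_comment(comment: str) -> str:
    # Escaping turns <, >, & and quotes into entities, so user input cannot
    # break out of the surrounding HTML element.
    return f'<p class="comment">{html.escape(comment)}</p>'

print(render_comment('<script>alert(1)</script>'))
# <p class="comment">&lt;script&gt;alert(1)&lt;/script&gt;</p>
```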

Naor: Additionally, there are numerous open-source and free tools available for use during coding and continuous integration (CI) phases. Implementing automatic gates at various development stages, starting from the integrated development environment (IDE) to pull request scans, helps ensure that code intended for production is free from vulnerabilities.

Gaby: Taking ownership is crucial. It starts with someone on the team, whether in security or a founder, saying, "We care about security." This commitment should extend to incorporating security-related user stories in every sprint, making it a habitual practice alongside QA involvement.

Security Champions within the Developer Team

Robbe Van Roey: A current trend is appointing a security champion within the company. This person, not necessarily a security expert, becomes the voice of security in the development team. They consistently advocate for security considerations, review code, and assist developers in thinking about security aspects from the beginning.

Itzik: I fully agree with having someone in charge. They don't necessarily need to be security experts initially. By assigning them responsibility, they can gradually learn and gain a deeper understanding. This approach involves them in code reviews, helps address questions from developers, and ensures a proactive approach to security.

Gaby: So, as a first tip, appointing someone to take on this role is crucial.

Gil: Gaby, maybe I can add something because I want to give a concrete answer. Appointing someone is definitely the way to go, but sometimes that person might lack the experience needed. As Naor said, you can start by using some open-source tools. They are free to run in your CI/CD, and they can help raise the quality of your code.

Another aspect to consider is the difficulty of defending at scale when writing code. Building effective defenses is much harder than finding a single way into a system. For example, consider a scenario where the price of an item on an e-commerce checkout page is retrieved from the client-side HTML, and someone can manipulate it to pay zero dollars. That's not just a vulnerability but a flaw in the application's design, which emphasizes the importance of thinking about security from the beginning.
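
A minimal sketch of the fix for Gil's checkout example (hypothetical catalog and handler): the server looks up the price itself and ignores any price field the client sends.

```python
CATALOG = {"sku-123": 4999}  # price in cents, server-side source of truth

def order_total_cents(cart: list) -> int:
    total = 0
    for item in cart:
        sku = item["sku"]
        quantity = int(item["quantity"])
        if quantity < 1:
            raise ValueError("quantity must be at least 1")
        if sku not in CATALOG:
            raise ValueError(f"unknown sku: {sku}")
        # Any "price" the client sent is deliberately ignored.
        total += CATALOG[sku] * quantity
    return total

# A tampered request claiming a price of 0 has no effect on the total.
assert order_total_cents([{"sku": "sku-123", "quantity": 2, "price": 0}]) == 9998
```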

Gaby: I agree. With the advancement of AI and machine learning, parameters are becoming more text-based, making them harder to sanitize. This could be a topic for a dedicated webinar. We should be cautious about security not only in terms of vulnerabilities but also in how we design our systems.

Understanding Your Security Risk

Gaby: Now, shifting the focus, let's say a customer or a regulatory requirement like SOC 2 or PCI demands security measures. Where do we start? How big is our security problem? Do we begin with a scanner, bug bounty, or a pen tester? How can we assess the extent of our security issues to plan accordingly?

Gil: Before diving into security measures, it's crucial to understand the goal of security. Ultimately, we aim to build a resilient system that can serve our customers effectively. However, securing everything is a formidable task, so we need to prioritize. Start with mapping the assets you want to secure and determine which ones to address first.

Ask questions like whether your public APIs are exposed to the internet or if there's an internal database within your VPC. Understand the assets and their sensitivity, such as collecting sensitive customer data like KYC documents. Once you have this list, you can start tackling security measures.

Now, going back to your original question, consider outsourcing the help of a penetration tester. This external expert can assist in mapping your assets, perform threat modeling, and help identify the best areas for return on investment. Security is an ongoing process, and external expertise can guide you in understanding where to start and where to focus your efforts.

Gaby: I appreciate this answer because it aligns with the fundamental principles of security—starting with a risk assessment, identifying critical assets, and then aligning priorities based on the potential impact. Knowing the risks beforehand allows you to make informed decisions about where to invest in security measures. Now that we are aware of the risks, how do we gauge the size of our security problem? Any insights from the team?

How To Gauge How Secure Your Code Is?

Robbe Van Roey: I believe that with these three options you mentioned, or any other option, the order in which you approach them matters. When considering whether to use a code-checking tool, engage a pen tester, or start a Bug Bounty program, I suggest following a sequence.

The initial step should be running code scanning tools on your code. While these tools are improving, they are not flawless. However, they can give you a view of the low-hanging fruit in your security profile. Identifying patterns of issues, such as multiple SQL injection problems, allows you to focus on areas that need attention. Once you complete this step, move on to a pen test.

A pen test involves a real person, a hacker, probing your application. Although they won't find everything, they can delve deeper than automated tools. Engage with the pen tester, ask questions, and use their insights to guide further investments. Only when you've addressed the findings from code scanning and pen testing should you consider Bug Bounty programs.

Bug Bounty programs are suitable for very mature companies. They involve opening up your systems to external hackers who get rewarded for identifying vulnerabilities. This should come later in your security journey, as starting too early might overwhelm you with reports, leading to higher costs.

Gaby: I agree. It's probably worth having both pen testing and Bug Bounty, as they serve different purposes. Pen testing might find vulnerabilities that are not necessarily exploitable, while Bug Bounty tends to focus on exploitable issues. However, Bug Bounty is more costly, so it's wise to be more mature and address the low-hanging fruit through pen testing before starting a Bug Bounty program.

WAFs Are Good, But They Aren't The Only Security Layer Needed

Robbe Van Roey: Absolutely. It's important to note that pen testers often encounter situations where the claimed effectiveness of a WAF (Web Application Firewall) is overestimated. During pen tests, many organizations claim to have a WAF that should block the tester, but in reality, it often doesn't. WAFs can be helpful, but relying solely on them for security is not advisable.

Gaby: And just to add, a WAF may just make the attacker's payload look cooler; it doesn't guarantee comprehensive security. It's crucial to understand that WAFs have limitations, and a holistic approach to security involves addressing vulnerabilities at multiple layers, including code scanning, pen testing, and Bug Bounty programs. Like Gil said, we're still going to have to validate our user input.

The Importance Of Secure Code Training For Developers

Itzik: Absolutely, I agree with Gil. While we emphasize the importance of using code scanners, it's crucial to provide practical tips for developers to enhance their security skills. Security is a distinct profession, akin to development, architecture, or design. Delving into the security domain and exploring walkthroughs, capture the flag exercises, and vulnerability exploitation write-ups can significantly improve one's understanding.

Developers should focus on comprehending where vulnerabilities might exist in their code, thinking about how the code could be diverted from its intended control. An example of this is dealing with user-provided URLs. Developers might expect valid image URLs, but they should also consider scenarios where a user could provide an internal address, potentially leading to unintended consequences.

To illustrate, if an application fetches an image from a URL provided by a user, it might inadvertently access an internal network address. Developers need to filter and validate user input, anticipating potential malicious inputs. This involves crafting robust regular expressions, considering edge cases, and acknowledging the challenges in handling such scenarios.
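
To make that concrete, here is a hedged sketch (Python standard library only) of the kind of check Itzik describes for user-supplied image URLs; a real defense also needs scheme allow-lists, redirect handling, and protection against DNS rebinding, so treat this as an illustration rather than a complete solution.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_image_url(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        resolved = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for _, _, _, _, sockaddr in resolved:
        ip = ipaddress.ip_address(sockaddr[0])
        # Refuse anything that points back into our own network.
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True

print(is_safe_image_url("http://127.0.0.1:8080/admin"))  # False
print(is_safe_image_url("ftp://example.com/cat.png"))    # False (scheme not allowed)
```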

In essence, developers should envision and understand how an attacker might manipulate inputs to alter the expected behavior of the code. By adopting this mindset during development and code reviews, developers can proactively identify and mitigate security risks, contributing to a more secure software development lifecycle.

Gaby: Absolutely. The OWASP Top 10 is an excellent resource for developers to understand and mitigate common security risks. It provides insights into real-world vulnerabilities and helps developers build a foundational understanding of security best practices. Following security guidelines and best practices, such as those outlined by OWASP, is crucial for writing secure and robust code.

Additionally, I would like to emphasize the importance of staying updated on security news, attending relevant conferences, and participating in security-focused communities. Engaging with the broader security community allows developers to learn from real-world experiences, discover emerging threats, and adopt evolving best practices.

It's not just about learning a specific framework, but rather cultivating a security mindset and staying informed about the evolving threat landscape. This proactive approach can contribute significantly to writing secure, high-quality code.

Naor: To add to that, another valuable resource for developers is the OWASP Application Security Verification Standard (ASVS). It provides a framework of security requirements that can be used to design, build, and test modern web applications and web services. By aligning with such standards, developers can enhance the security posture of their applications.

Robbe Van Roey: Absolutely, and it's important to recognize that frameworks and standards are not static; they evolve to address new challenges. Therefore, developers should make continuous learning a part of their routine, keeping up with the latest developments in application security.

Gaby: Well said. Continuous learning and staying informed about evolving security standards are key elements for developers aiming to enhance their security skills. It's about adopting a proactive mindset and actively participating in the broader security community.

Privacy Awareness: An Essential Aspect of Secure Code Training

Gaby: Absolutely, and it's crucial to recognize that privacy is not only a legal requirement but also a fundamental human right. Understanding the ownership of data and ensuring its proper handling is essential. As developers, we need to build systems that respect user privacy and comply with regulations such as GDPR and CCPA.

Your example of debug logs highlights a critical aspect of privacy concerns within code. Logging sensitive user information in production logs poses significant risks, both from a privacy and security standpoint. Ensuring that logs are appropriately configured and do not capture sensitive data is vital to meeting privacy requirements.
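
A minimal sketch of that logging point (hypothetical field names, standard logging module): known sensitive fields are redacted before the record is written, so production logs do not quietly accumulate personal data.

```python
import logging

SENSITIVE_FIELDS = {"email", "ssn", "credit_card", "password"}

def redacted(event: dict) -> dict:
    return {key: ("[REDACTED]" if key in SENSITIVE_FIELDS else value)
            for key, value in event.items()}

logging.basicConfig(level=logging.INFO)
event = {"user_id": 42, "email": "jane@example.com", "action": "checkout"}
logging.info("checkout event: %s", redacted(event))
# INFO:root:checkout event: {'user_id': 42, 'email': '[REDACTED]', 'action': 'checkout'}
```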

It's essential to integrate privacy considerations into the development process, from designing the system architecture to implementing code. By adopting a privacy-by-design approach, developers can minimize the risk of unintentional data exposure and better protect user privacy.

Gil: Exactly, and I would add that privacy awareness should be an integral part of secure code developer training. Developers need to understand the impact of their code on user data and the legal implications of mishandling that data. Including privacy considerations in the development life cycle can help create a culture of privacy awareness within development teams.

Gaby: Absolutely, fostering a culture of privacy awareness within development teams is crucial. It requires ongoing education and collaboration between developers, legal teams, and privacy experts. By working together, organizations can ensure that privacy is not an afterthought but an integral part of the development process.

Gil: And it's not just about compliance; it's about respecting users' rights and building trust. Users are becoming increasingly aware of privacy issues, and organizations that prioritize and communicate their commitment to privacy can gain a competitive advantage.

Gaby: Well said. Prioritizing privacy is not only a legal obligation but also a way to build trust and maintain a positive relationship with users. It's about demonstrating a commitment to protecting user data and respecting their privacy rights throughout the entire lifecycle of the application.

Which KPIs Demonstrate Better Secure Coding?

Gaby: So, a few more questions about KPIs. How do we know if we're improving in terms of security? We've started implementing measures, such as following the OWASP Top 10 guidelines, conducting secure code training for developers, and performing scanning. Are we truly getting better, and can we measure this improvement? I believe we are making progress, but is it quantifiable? Please feel free to share your thoughts.

Gil: It's a great question, and it's challenging to answer. Quantifying improvement in security is not straightforward. Unlike functional aspects of software, where we can expect specific outcomes, security involves dealing with potential bugs and vulnerabilities. Ensuring the robustness and bug-free nature of the code is difficult. Even with working code, we can't guarantee the absence of bugs. The key is to implement measures like code scanners, code reviews, and engaging pen testers. Professional hackers, like pen testers, can provide valuable insights. If they can't find issues, it's a positive sign. However, security is an ongoing challenge, and it's not just about code; configuration errors can also pose risks.

Gaby: Indeed, security isn't a zero-or-100 scenario. It's about making progress. By implementing tools, education, penetration tests, and bug bounty programs, you can demonstrate that progress. Security is interconnected with incident response, detection, and protection. It's a holistic approach that measures the entire security ecosystem. Does anyone else have thoughts on this?

Naor: Evaluating the overall trend within the organization is crucial. Tracking changes in sensitive data, vulnerabilities, and security measures helps assess progress. I suggest breaking it down into three phases: discovery, protection, and response. Discover existing vulnerabilities, set up protection against new threats, and establish a response plan. This approach helps gauge the organization's security baseline and trend over time.

Gaby: Thank you, Naor. Robbe, any additional insights?

Robbe Van Roey: Pursuing the goal of zero vulnerabilities is unrealistic. Even major companies like Google constantly find new bugs. The objective should be to minimize risks and get as close to zero as possible. Consistently working towards this goal is crucial. Mimicking security practices of successful companies can be beneficial. Striving for continuous improvement is more practical than aiming for perfection.

How To Implement Secure Code Training Effectively?

Gaby: Great insights. Two more questions. How do you introduce secure coding or secure code training without overwhelming the team? Some R&D teams resist, claiming developers already possess knowledge or that they're too valuable to take offline for extended training. What's a realistic approach, and how much time should be dedicated to practicing and fixing secure coding issues?

Gil: I was responsible for overseeing a team of over 700 engineers and was often seen as the strict enforcer. In certain roles, like security and law, you end up taking a side. Engineers would sometimes react with a sense of "more work" when they saw me enter the room. As a good security manager, the goal is not to hinder progress or block roadmaps but rather to facilitate and enable. In my experience overseeing features and workflows in our augmented reality product, we blocked only one that posed a significant privacy risk: the idea of uploading camera feeds to our servers. It's essential to avoid self-sabotage in security efforts.

Gaby: Security should be approached as a fun and gamified activity, drawing inspiration from the popularity of CTFs (Capture The Flag) among hackers and pen testers. Introducing a gamified approach into the developer community can be effective, as seen in the positive response to CTFs. It's crucial to make learning enjoyable and challenging, encouraging developers to invest time willingly.

Itzik: To prevent overwhelming the team, it's important to introduce security basics and the OWASP Top 10. A phased approach, tackling one topic per month and breaking it down into manageable pieces, allows developers to spend short periods regularly learning new concepts. Ongoing secure code training for developers is vital, emphasizing that becoming a secure developer is not a one-time task but a continuous process.

Gaby: Recognizing the difference between compliance-driven training and a genuine commitment to security is crucial. Those serious about security incorporate it into daily work throughout the year, rather than treating it as an annual task. Continuous, ingrained training programs are more effective in building and retaining secure development practices.

AI Tools For Writing Secure Code - Are They Better Than Human Coding?

Gil: Contrary to the belief that AI copilots can write secure code, human oversight and security measures are still essential. Security remains necessary even when utilizing automatic code generation capabilities.

Gaby: A thought-provoking question for the audience: Is code generated by a person or delivered by AI more secure? Exploring this aspect could provide valuable insights into the evolving landscape of application security trends.

Robbe Van Roey: In late December, researchers published a paper specifically examining developers who use Copilot. While I don't have the exact numbers memorized, it reported roughly a 70 percent increase in vulnerabilities among Copilot users. It's a concerning finding, as developers can get addicted to the convenience of Copilot, quickly incorporating suggested code without closely scrutinizing its security implications.

Itzik: Developers often succumb to the temptation of hitting tab and accepting whatever Copilot provides to expedite their work. The desire to complete tasks quickly can lead to overlooking potential security issues in the generated code. Unfortunately, this efficiency also benefits attackers, posing a significant problem.

Prioritizing Vulnerabilities

Gaby: Regarding the previous question and customer concerns, prioritization is key. Understanding the critical issues within the broader organizational context is essential. It's not just about individual vulnerabilities but assessing how they might impact the organization as a whole. Factors like the service's exposure to the world and the practical use of vulnerable functions should influence prioritization. For instance, a high CVSS score might be less significant if the vulnerable function isn't actively utilized.

Naor: Additionally, considering the reachability of a vulnerability is crucial. Knowing whether a vulnerable function is genuinely in use helps determine the severity and priority of the issue. Our platform incorporates this concept of reachability, enabling organizations to focus on addressing vulnerabilities that directly impact their infrastructure.
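
As a toy illustration of that prioritization (the findings and weights below are made up for illustration, not a real scoring model), exposure and reachability can outweigh the raw CVSS score:

```python
findings = [
    {"id": "CVE-A", "cvss": 9.8, "internet_facing": False, "reachable": False},
    {"id": "CVE-B", "cvss": 7.5, "internet_facing": True,  "reachable": True},
    {"id": "CVE-C", "cvss": 8.1, "internet_facing": True,  "reachable": False},
]

def priority(finding: dict) -> float:
    score = finding["cvss"]
    if not finding["reachable"]:
        score *= 0.3  # the vulnerable function is never called in this service
    if not finding["internet_facing"]:
        score *= 0.5  # only reachable from inside the VPC
    return score

for finding in sorted(findings, key=priority, reverse=True):
    print(finding["id"], round(priority(finding), 2))
# CVE-B ends up first despite having the lowest CVSS of the three.
```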

Gil: At Piiano, we address data protection for developers by providing APIs for encrypting sensitive data. This includes personal information governed by privacy regulations. We aim to simplify encryption processes, eliminating the need for developers to be cryptography experts. Our infrastructure handles key rotation, key management, and key distribution challenges at scale. Securing customer data is a vital aspect of overall security, extending beyond code to encompass all aspects of data protection.
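
As a generic illustration of field-level encryption of PII before storage (this is not Piiano's API; it uses the third-party cryptography package, and in practice the key would come from a managed KMS with rotation rather than being generated inline):

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()   # in production: fetched from a KMS and rotated
fernet = Fernet(key)

record = {"user_id": 42, "email": "jane@example.com"}
record["email"] = fernet.encrypt(record["email"].encode()).decode()
# The stored row now holds ciphertext; reading the plaintext is an explicit,
# auditable decryption step.
print(fernet.decrypt(record["email"].encode()).decode())  # jane@example.com
```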

Gaby: I'd like to build on that, Gil, because it's an excellent point. Companies often struggle post-incident to locate their personally identifiable information (PII), investing significant resources and time in scanning databases and files with regular expressions. It takes an enormous amount of effort, with false positives and ever more databases to cover. Going back to basics: if you integrate privacy and security into your app's design, considering them during development, using the right technology, and incorporating them into user stories, you can save a tremendous amount of time and resources later. It's a low-cost approach that doesn't hinder speed; it just requires awareness. Scale your business and go to market with product, privacy, and security hand in hand. This webinar aims to raise awareness for security teams and founders to consider these aspects early on. Don't wait until Series A, B, or C; address it now to avoid slowing down later when scaling.

In Conclusion

Itzik: In wrapping up, I want to emphasize the importance of priorities in the overwhelming world of security. Achieving zero risk is unrealistic, but we strive to reduce risks and prioritize effectively. The discussion highlighted using scanners and methodologies from the inception, minimizing late-stage efforts and costly bug discoveries. Prioritization is the key, focusing on what truly matters and tracking information to ensure its integrity. This is the crucial takeaway from today's webinar.

Gaby: Thank you all for sharing your insights. Any final words, anyone? Itzik, perhaps a brief conclusion?

Itzik: In closing, we stressed the significance of setting priorities in the face of infinite security challenges. Utilizing scanners, adopting methodologies early on, and considering what truly matters help navigate the complexities. The key is to prioritize vulnerabilities and data, and to establish measures ensuring data integrity. This is the essential takeaway from our discussion. Thank you all for your valuable contributions.