Application Security Assessment 101
by Wictor Olsson 2023-12-08
As security consultants working across multiple disciplines, my colleagues and I spend a large part of our time conducting application security assessments.
When building and maintaining a product or a service, there are many parts with the potential to introduce risk. Components, configuration, third-party libraries, even your own input validation and business logic might have security flaws which, if abused by a clever attacker, could cause reputational damage and/or financial loss.
It is becoming standard to perform security analysis and testing of your applications as part of the development lifecycle. Proactively and continuously identifying and handling security issues in your implementation and its dependencies is considered good practice. Apart from adhering to good practice and protecting the users of your software, the incentives to regularly perform such activities vary; they could be customer or industry requirements, standards, certifications or government regulations (PCI-DSS, NIST SP 800-53, UNECE R155, NIS2, the European Cyber Resilience Act).
Scoping and planning
A risk- or exposure-based approach is often taken when scoping and prioritizing an assessment. Publicly accessible APIs and functionality are typically of interest, mostly focusing on the "homebrewed", custom bits of the code. But it is also important to consider internal or external integration points (and functions), as these are common areas for creative adaptations which could be directly or indirectly abused by an attacker. Neglecting security in your development lifecycle could result in data being stored insecurely, transmitted in the clear or leaked, or in malicious input being interpreted as code and executed.
Common areas to test and review are:
- Authentication and authorization methods: could your MFA be bypassed?
- Input validation and processing: do you filter junk input sent to that database/parser? (see the sketch after this list)
- Secret storage methods and securing data: is your customers' PII stored correctly?
- Cryptography in terms of implementation and settings: are you using deprecated and weak algorithms?
- Business logic validation: can your payment flow be subverted and abused?
- Transport security settings: is it possible to intercept client traffic?
- Service configuration: is it secure or just running defaults?
- Framework implementation: does it adhere to best practices?
- Third-party libraries: are they up-to-date, patched?
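To make the input validation item above concrete, here is a minimal sketch in Python (the `users` table and its columns are hypothetical) of the kind of flaw we look for during code review: a query built by string concatenation next to its parameterized counterpart.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # VULNERABLE: the username is concatenated straight into the query,
    # so input like "x' OR '1'='1" changes the query's meaning.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # SAFER: a parameterized query keeps data and code separate;
    # the driver handles quoting, so the input cannot alter the SQL.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    payload = "x' OR '1'='1"  # classic injection payload
    print(find_user_unsafe(conn, payload))  # leaks every row
    print(find_user_safe(conn, payload))    # returns []
```

The same separation of data and code applies to any parser or interpreter your input eventually reaches, not just SQL.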
Methodology and tools of the trade
The methods used to identify these issues vary depending on the target. Most of the time a hybrid methodology is used, where both static and dynamic analysis and testing are performed to gain as much insight into the target as possible. An attacker gains the upper hand by getting to know the inner workings of the system they are attacking (sometimes better than its makers); we similarly aim to learn as much as possible in the time available to us.
Often this is done through code and configuration review combined with dynamic testing of the application, using well-known tools or, where needed, customized or purpose-built tooling. Some targets include black-box components, which may call for reverse engineering or network traffic interception. Information from automated techniques and tools feeds into manual analysis and validation to gain insight.
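To give a feel for what the static analysis side automates, below is a toy checker built on Python's standard ast module that flags subprocess-style calls made with shell=True, a common command injection risk. It is a deliberately minimal sketch of the idea; real tools such as Semgrep express checks like this in dedicated rule languages and add data-flow analysis on top.

```python
import ast
import sys

class ShellTrueFinder(ast.NodeVisitor):
    """Flag any call passing shell=True, a frequent injection risk."""

    def __init__(self) -> None:
        self.findings: list[tuple[int, str]] = []

    def visit_Call(self, node: ast.Call) -> None:
        for kw in node.keywords:
            if (kw.arg == "shell"
                    and isinstance(kw.value, ast.Constant)
                    and kw.value.value is True):
                self.findings.append((node.lineno, ast.unparse(node.func)))
        self.generic_visit(node)

if __name__ == "__main__":
    # Usage: python find_shell_true.py target.py
    source = open(sys.argv[1]).read()
    finder = ShellTrueFinder()
    finder.visit(ast.parse(source))
    for lineno, func in finder.findings:
        print(f"{sys.argv[1]}:{lineno}: {func}(..., shell=True)")
```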
These methods and tools can, and in many cases should, be integrated into the development and testing process to identify common flaws as early as possible.
Examples of tooling that could be used:
- Automated static code analysis tools - Semgrep.
- Tools for analyzing code complexity - Lizard.
- Disassemblers and decompilers - Ghidra.
- Network traffic proxies and analyzers - Wireshark.
- Debugging and instrumentation frameworks - x64dbg, Frida.
- Applications for compositional analysis - Trivy.
- Tools for vulnerability and network scanning - Burp Suite, Nmap.
- Specialized analyzers and fuzzers for robustness testing - FFUF, Radamsa (a small harness sketch follows this list).
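As an illustration of the robustness testing mentioned above, here is a minimal, hypothetical harness that pipes a seed input through Radamsa (assumed to be installed and on PATH) and throws the mutations at a toy parser. A real campaign would target actual parsing code and record crashing inputs for triage.

```python
import subprocess

def parse_record(data: bytes) -> None:
    """Hypothetical stand-in for the parser under test."""
    fields = data.split(b",")
    if len(fields) != 3:
        raise ValueError("expected three fields")
    int(fields[1])  # blows up on non-numeric input, as a fuzzer may find

SEED = b"alice,42,admin"

# Radamsa reads a sample on stdin and writes a mutated version to stdout.
for i in range(1000):
    mutated = subprocess.run(
        ["radamsa"], input=SEED, capture_output=True, check=True
    ).stdout
    try:
        parse_record(mutated)
    except ValueError:
        pass  # expected rejection of malformed input
    except Exception as exc:  # anything else hints at a robustness bug
        print(f"iteration {i}: {exc!r} on input {mutated[:60]!r}")
```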
Independent of the application type, it is important to build up an initial familiarity with the application and deduce its attributes:
- What does it do?
- How does it communicate and with what?
- What components does it use?
- Which parts are custom and which are commodity?
- Where is the most sensitive functionality?
- Where does complexity live? (see the sketch after this list)
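The last question, where complexity lives, can often be answered mechanically. As a small sketch, Lizard's Python API (one of the tools listed earlier) can rank functions by cyclomatic complexity to pick out review candidates:

```python
import sys
import lizard  # pip install lizard

# Analyze the files given on the command line and rank their functions
# by cyclomatic complexity, a rough proxy for "where complexity lives".
functions = []
for path in sys.argv[1:]:
    functions.extend(lizard.analyze_file(path).function_list)

functions.sort(key=lambda f: f.cyclomatic_complexity, reverse=True)
for f in functions[:10]:
    print(f"{f.cyclomatic_complexity:4d}  {f.nloc:5d}  {f.long_name}")
```

Complexity hot spots are not vulnerabilities in themselves, but they are where review and testing effort tends to pay off first.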
With recurring tests, knowledge is carried over and becomes an advantage over external attackers. With white-box testing, which is the most common case for us, we get an even better understanding of the test target, which improves our efficiency.
Endgame
Answering these questions formulates the game plan for further analysis and testing to uncover vulnerabilities in the target software. The end result is information and data indicating potential issues; these require manual review and verification to put them into context, assess the risk, find the root cause, and formulate a fix or recommend a suitable solution.
During our engagements, we acquire a deeper understanding of the customer's development process, including its strengths and weaknesses. This insight can emerge early on through discussions with developers about their current practices, or later through the technical findings we uncover. As a result, we often include a section in our reports that offers practical tips and strategies for ongoing use in development and testing.
For more information regarding common application vulnerabilities, see the OWASP Top Ten (https://owasp.org/Top10/).
Finally, check out our application security overview and our range of services, which include penetration testing, secure code review, secure design and advisory. Give us a call or send an email to find out more, or if you want to provide feedback.