Although the goal of CVD policies is clear, and statistics indicate a positive development of these policies and their use, current policies have several problems that should be discussed in order to understand how these policies may fail to prevent crime on both the victim and the offender side. Taking a traditional deterrence approach, problems with the reporting process may influence a person’s decision to follow CVD guidelines.
The organization’s response
Organizations should adopt a CVD policy because they want to increase their security, but this also means that the organization must be able to respond to a reported vulnerability. In addition, organizations without a CVD policy may also receive vulnerability reports. In the absence of a CVD policy, it is not clear to disclosers how the organization will respond. The expected reaction of such an organization may influence the behavior of a potential discloser: the organization could (1) respond gratefully and patch the vulnerability as soon as possible, (2) ignore the report, (3) deny the vulnerability, or (4) report it to the police. An organization that does not have a CVD policy may, for example, not know how to respond to the vulnerability or not understand it, and could therefore decide to ignore the report or deny the vulnerability’s existence. It may even misinterpret the intentions of the reporter and report the disclosure to the police as a crime.
Even organizations that do have a CVD policy may not have the capacity to handle major vulnerabilities, which can delay the patching process. The longer a vulnerability remains unpatched, the higher the risk of rediscovery or of the discloser deciding to make it public anyway (Herr et al. 2017). Most CVD policies state how much time the organization will take before fixing a vulnerability, but that period can easily be 6 months. In response, new companies have arisen that handle coordinated vulnerability disclosure for small companies (Huang et al. 2016).
Moreover, the goal of having a CVD policy is to keep vulnerabilities private until they are patched. This means, however, that the outside world, including the discloser, cannot see that an organization is working on a patch. It is therefore key that an organization keeps communicating with the discloser about the patching process, which is also what the majority of the researchers in the NTIA (2016) report expect. Nevertheless, only 58% received a notification when the vulnerability had been patched. Depending on a person’s motive, this could influence the discloser’s behavior.
Unclear or unjust rules
In order for a CVD policy to work, both the company and the discloser need to stick to the rules in the policy. An absence of clearly identified rules may lead to a lack of disclosures, as may guidelines that are too strict. For example, deadlines in the policy could force a company to publicly disclose a vulnerability that has not yet been patched, as the company cannot predict how the discloser would respond if it did not.
For the discloser, there is no guarantee that he or she will not be prosecuted under current CVD guidelines (NTIA 2016). An organization without a policy may report a disclosure to the police immediately, as may organizations with clear policies if they believe the discloser did not abide by their rules. In the Netherlands, the public prosecutor could also decide to prosecute if they believe a crime has been committed. Some form of system trespassing is necessary for most disclosures, as it is not possible to ask the system owner for permission in advance. For example, in the NTIA (2016) survey, researchers indicated that they generally find vulnerabilities during their daily activities, without actively looking for them. In that sense, requiring researchers to ask for permission partly defeats the purpose of having a CVD policy.
For some organizations, it is publicly known how they generally handle vulnerability disclosures. First, bug bounty programs are publicly known, and some organizations are very open about their CVD policies and actively encourage the hacker community to test their systems. However, there is a big difference between open and closed communities, even within the same sector. For example, while the Linux community actively encourages people to find vulnerabilities, Microsoft historically tended to prosecute people who disclosed vulnerabilities (e.g., Steinmetz 2016; Taylor 1999). Similarly, within the hacker subculture there is a general tendency to share vulnerabilities inside the subculture, but not with others such as law enforcement or large commercial companies that are not open source (Taylor 1999). These unclear, and sometimes unwritten, rules result in a situation in which one person will be prosecuted for the same behavior for which someone else receives an acknowledgement or even a bounty. This may create the perception that the rules are unfair or even unjust, which could influence if and how someone discloses a vulnerability.
When the vulnerability has been patched, or when the deadline described in the CVD policy has expired, the discloser and the IT-system’s owner can decide together to disclose the vulnerability to the public. There are several reasons to do so. First, it is a way to provide the discloser with some acknowledgement of his or her work and ability to find the vulnerability. Of the researchers in the NTIA (2016) report, 53% stated that they expect to receive some form of acknowledgement, although it should be noted that a minority (14%) prefer to remain anonymous.
Another reason to disclose these vulnerabilities is to inform the public about the vulnerability and about what should be done to prevent its exploitation. Other IT-systems may contain similar vulnerabilities, or patching the vulnerability in software may require an update from users (Department of Justice 2017). The amount of information that a company is willing to share about the vulnerability may, however, be limited. Discovery of the vulnerability may be embarrassing for the company, affect its finances, or reveal too much of the underlying operation. This limits the usability of the disclosed information and may influence a person’s decision to report a vulnerability to a party that has not shown openness about vulnerabilities.
In a similar fashion, some recent incidents have shown that governments withhold vulnerabilities in order to engage in offensive attacks (Ablon and Bogart 2017). They may have found these vulnerabilities themselves, but it is also very likely that they have bought them on underground exploit markets (Fung 2013; Healey 2016). They do not disclose these vulnerabilities, not even to the system owners, which has caused major damage when these vulnerabilities ended up in the wrong hands. For example, the WannaCry ransomware used the EternalBlue vulnerability, which is said to have been discovered by the National Security Agency (NSA) several years earlier (Nakashima and Timberg 2017; Titcomb 2017) and was not disclosed until the ShadowBrokers published it. Microsoft patched the vulnerability, but 3 months later many systems were still vulnerable, which enabled the large-scale, worldwide damage of the WannaCry ransomware (Newman 2017). This is likely one of the reasons that some parts of the hacker culture tend to share vulnerabilities within the community, but not with others and especially not with governments (Taylor 1999). Additionally, by buying these vulnerabilities on underground markets, governments may send the message that they do not support CVD, as they are rewarding criminals who sell their exploits.
Knowledge about CVD among possible offenders
Several of the problems discussed above may influence a person’s decision about how to handle a vulnerability. To be able to make that decision, a person first needs to know that a vulnerability can be reported through CVD, and then must know the policy’s rules. From the NTIA (2016) report, it is clear that most people who could be regarded as security researchers know about these policies. As the NTIA also acknowledges, it may very well be the case that their respondents have an interest in CVD or at least already know about it. It is unknown to what extent this can be said of the general population. For the purposes of this work, we assume that a person with the skills necessary to identify vulnerabilities in the wild knows about the possibility of using CVD.