Vulnerability Management Project Best Practices


An enterprise vulnerability management project can realize its full potential when it is built on mature, well-defined goals that meet the information needs of all stakeholders, when its output can be tied to corporate goals, and when it reduces the overall risk of the enterprise.

Vulnerability management tools can detect risks, but they require a foundation of people and processes to ensure the project's success.

The vulnerability management project has four phases:

1. Identify asset criticality, asset owner, scan frequency, and establish a repair timeline;

2. Discover assets on the network and create a list;

3. Identify vulnerabilities in discovered assets;

4. Report and fix the identified vulnerabilities.

The first phase focuses on establishing measurable, repeatable processes. The later phases carry out the processes established in the first phase, guided by its four priorities and with a focus on continuous improvement. Below we analyze these phases in detail.

Phase 1: The vulnerability scanning process

The first step in this phase is to determine the criticality of assets in the enterprise.

To build an effective risk management project, you must first determine which assets in your organization need protection. This includes computing systems, storage devices, networks, data types, and third-party systems on the enterprise network. Assets should be classified and ranked according to their true inherent risk to the business.

The asset's inherent risk score should consider many aspects, such as physical or logical connections to higher-level assets, user access, and system availability.

For example, assets in the DMZ that have logical access to the account database are more critical than assets in the lab; assets in a production environment are more critical than those in a test environment; and Internet-routable web servers are more critical than internal file servers.
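As an illustration of this kind of ranking, here is a minimal sketch that scores assets using simple weighted factors. The factor names and weights are assumptions for demonstration, not a standard model:

```python
# Hypothetical sketch: rank assets by inherent risk using simple
# weighted factors (environment, exposure, data sensitivity).
# All weights and field names are illustrative assumptions.

ENV_WEIGHT = {"production": 3, "dmz": 3, "test": 1, "lab": 1}
EXPOSURE_WEIGHT = {"internet": 3, "internal": 1}

def inherent_risk(asset):
    """Return a coarse inherent-risk score for one asset dict."""
    score = ENV_WEIGHT.get(asset.get("environment"), 1)
    score += EXPOSURE_WEIGHT.get(asset.get("exposure"), 1)
    if asset.get("touches_sensitive_data"):
        score += 3
    return score

assets = [
    {"name": "web01", "environment": "dmz", "exposure": "internet",
     "touches_sensitive_data": True},
    {"name": "file01", "environment": "production", "exposure": "internal",
     "touches_sensitive_data": False},
    {"name": "lab07", "environment": "lab", "exposure": "internal",
     "touches_sensitive_data": False},
]

# Highest-risk assets first, matching the DMZ > production > lab ordering.
ranked = sorted(assets, key=inherent_risk, reverse=True)
print([a["name"] for a in ranked])  # ['web01', 'file01', 'lab07']
```

A real model would add more factors (user access, system availability, connections to higher-level assets), but the principle of a repeatable, comparable score is the same.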

However, remediation cannot be ignored even for less critical assets. Attackers can use these often-overlooked assets to gain a foothold, then move laterally through the network, compromising one system after another until they reach systems that store sensitive data. Remediation work should always be prioritized by overall risk.

The second step is to identify the owner of each system.

The system owner is accountable for the asset, its associated risks, and the consequences if it is compromised. This step is critical to the success of a vulnerability management project because it drives accountability and remediation within the enterprise.

If no one owns a risk, no one will push to mitigate it.

The third step is to determine the scanning frequency.

The Center for Internet Security recommends in its Top 20 Critical Security Controls that companies "automate vulnerability scanning on all systems on their network once a week or more frequently". Network security vendor Tripwire publishes a vulnerability signature update (ASPL) once a week.

Frequent scanning enables asset owners to track the progress of remediation work, discover new risks, and reprioritize fixes based on newly collected intelligence.

When a vulnerability is first disclosed, it may receive a lower score because no exploit is yet known. After it has been public for a while, automated exploit kits may appear, raising its risk. A system can also become newly vulnerable to one or more vulnerabilities because of freshly installed software or a patch rollback.

Many factors can change an asset's risk profile, and frequent scanning ensures that asset owners stay current. At a minimum, vulnerability scans should run no less than once a month.

The fourth step in establishing this process is to define and document the remediation timelines and thresholds.

Vulnerabilities that can be exploited in an automated way to give an attacker privileged control should be fixed immediately. Vulnerabilities that grant privileged control but are harder to exploit, or that are currently only theoretically exploitable, should be fixed within 30 days. Lower-risk vulnerabilities should be fixed within 90 days.
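The timelines above can be captured as a simple lookup so that remediation due dates are computed consistently. A minimal sketch; the severity labels are assumptions for illustration:

```python
from datetime import date, timedelta

# Hypothetical encoding of the remediation timelines described above.
# The severity labels ("critical", "high", "low") are illustrative.
REMEDIATION_SLA_DAYS = {
    "critical": 0,   # automated exploit grants privileged control: fix immediately
    "high": 30,      # harder to exploit, or only theoretically exploitable
    "low": 90,       # lower-risk vulnerabilities
}

def fix_by(severity, discovered=None):
    """Return the date by which a vulnerability must be fixed."""
    discovered = discovered or date.today()
    return discovered + timedelta(days=REMEDIATION_SLA_DAYS[severity])

print(fix_by("high", date(2024, 1, 1)))  # 2024-01-31
```

Recording the due date alongside each finding also makes the exception process auditable: any vulnerability open past its date either needs a documented exception or is out of compliance.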

If the system owner is unable to fix the vulnerability within the appropriate time frame, the repair exception process should be applied.

The process should document the system owner's understanding and acceptance of the risk and set an agreed action plan to fix the vulnerability by a specific date. An expiration date is an essential element of any vulnerability exception.

Phase 2: Asset discovery and inventory creation

Asset discovery and inventory creation are the first and second of the Critical Security Controls. They are the foundation of any security project, information security or otherwise, because defenders cannot protect what they do not know exists.

The first Critical Security Control is an inventory of all authorized and unauthorized devices on the network. The second is an inventory of authorized and unauthorized software installed on the assets in the corporate network.

These two controls complement each other because an attacker always tries to find an easily exploitable system as the entry point into the corporate network. Once inside, the attacker can use control of that system to attack other systems and penetrate the network further.

Ensuring that the information security team knows what's on the network allows them to better protect these systems and provide guidance to owners of those systems to reduce the risks to these assets.

It is very common for users to deploy a system without notifying the information security team, from the test server to the wireless router set up at the employee's desk for convenience. In the absence of proper asset discovery and network access control, such devices can open the door to an internal network for attackers.

Perform asset discovery within a defined scope and identify which applications are running on these discovered assets before performing a vulnerability scan.

Phase 3: Vulnerability detection

Once all the assets on the network are discovered, the next step is to identify the vulnerability risk status of each asset.

Vulnerabilities can be discovered through an unauthenticated or authenticated scan, or by deploying an agent on the asset. An external attacker typically sees a system the way an unauthenticated scan does, so a scan without credentials shows the vulnerability exposure an outside attacker would see first.

Unauthenticated scanning helps identify very high-risk vulnerabilities that an attacker can detect and exploit remotely to gain deep access to the system. However, there are also vulnerabilities that are triggered when a user opens an email attachment or clicks a malicious link, and these are not detected by unauthenticated scans.

For more comprehensive vulnerability coverage, run an authenticated scan or deploy an agent. This improves the accuracy of vulnerability risk detection. Authenticated scans target the operating system and installed applications identified during the asset discovery and inventory creation phase, determining which vulnerabilities exist on them.

Vulnerabilities in locally installed applications can only be detected this way. Authenticated scans also identify the vulnerabilities an attacker would see from an external, unauthenticated scan.

Many vulnerability scanners report status based only on patch level or application version. Scanning tools should provide deeper analysis: their vulnerability signatures can check many factors, including (but not limited to) whether vulnerable libraries have been removed, registry key state, and whether an application fix requires a system restart to take effect.

Phase 4: Reporting and remediation

After the vulnerability scan is completed, each vulnerability is scored by an exponential algorithm based on three factors:

1. The skill required to exploit the vulnerability;

2. The privileges gained by a successful exploit;

3. The age of the vulnerability.

The easier a vulnerability is to exploit and the higher the privileges it grants, the higher its risk score. The risk score also increases with the age of the vulnerability. The primary metric to watch is the enterprise's overall baseline average risk score.
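The article does not publish the actual scoring algorithm, so the following is only a hedged sketch of a score that rises with ease of exploitation, privileges gained, and age. The formula and weights are invented for illustration:

```python
# Hypothetical risk score combining the three factors above:
# exploit skill required, privileges gained, and vulnerability age.
# The formula and weights are illustrative, not a published algorithm.

def risk_score(skill_required, privileges_gained, age_days):
    """skill_required: 1 (automated) .. 5 (expert).
    privileges_gained: 1 (none) .. 5 (full privileged control)."""
    # Easier exploitation and higher privileges raise the base score.
    base = (6 - skill_required) * privileges_gained
    # Risk grows with age; capped so ancient low-risk bugs don't dominate.
    age_factor = min(1 + age_days / 30, 10)
    return round(base * age_factor)

print(risk_score(1, 5, 365))  # 250: automated, privileged, a year old
print(risk_score(5, 1, 0))    # 1: expert-only, unprivileged, brand new
```

Whatever the exact formula, the important property is the one the article describes: scores are comparable across assets and owners, so baselines and trends are meaningful.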

The most mature companies drive their average risk scores even lower and focus on fixing every vulnerability with a risk score above 1,000. The next metric to focus on is each asset owner's average risk score.

Asset ownership is identified in the first phase, so each owner should be able to see the baseline risk score for their assets. Similar to the overall corporate goal, each owner should aim to reduce their average risk score by 10% to 25% per year, until the score falls below the enterprise's acceptable threshold.

System owners should be able to see each other's scores and compare them to know where they stand. Owners with the lowest risk scores should be rewarded.

To facilitate remediation, system owners need concrete vulnerability data describing which vulnerabilities should be fixed and instructions on how to fix them. Reports should show the most vulnerable hosts, the highest risk scores, and/or breakdowns for specific high-risk applications. This lets system owners prioritize remediation sensibly, fixing first the vulnerabilities that most reduce the enterprise's risk.

As new vulnerability scans are performed, new metrics can be compared to previously scanned metrics to show risk trend analysis and fix progress.

Some metrics that can be used to track remediation include:

What is each owner's average vulnerability score per asset, and the overall average?

What is each owner's average time to fix infrastructure vulnerabilities, and the overall average?

What is each owner's average time to fix application vulnerabilities, and the overall average?

What proportion of assets have not received a vulnerability scan recently?

How many systems have remotely exploitable vulnerabilities that grant privileged access?
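The first of these metrics falls directly out of the scan results. A minimal sketch, assuming results are stored as simple records with illustrative owner and score fields:

```python
from statistics import mean

# Illustrative scan results; field names and values are assumptions.
scan_results = [
    {"owner": "web-team", "asset": "web01", "score": 800},
    {"owner": "web-team", "asset": "web02", "score": 400},
    {"owner": "db-team",  "asset": "db01",  "score": 1200},
]

def average_by_owner(results):
    """Return each owner's average vulnerability score per asset."""
    per_owner = {}
    for r in results:
        per_owner.setdefault(r["owner"], []).append(r["score"])
    return {owner: mean(scores) for owner, scores in per_owner.items()}

print(average_by_owner(scan_results))           # per-owner averages
print(mean(r["score"] for r in scan_results))   # overall average
```

Computing the same figures after each scan cycle gives the month-over-month trend the article calls for; the time-to-fix metrics work the same way, averaging the interval between a finding's first appearance and its last.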

When a project is first established, a high average vulnerability score and a long remediation cycle are not uncommon. The key is to make progress month by month, quarter by quarter, year after year.

As the team becomes more familiar with the process and better understands the risks attackers pose, enterprise risk scores and remediation times should steadily decrease.

Vulnerability and risk management is an ongoing process. The most successful projects are continuously adaptive and consistent with the risk reduction goals of enterprise network security projects. The process should be reviewed frequently and employees should keep up with the latest threats and trends in information security.

Ensuring the continued development of people, processes and technology will ensure the success of enterprise vulnerability and risk management projects.
