Beyond Technology: Managing the Blind Spots of Database Security

By Mark Trinidad

The sheer quantity of digital information available today makes storing, organizing and analyzing data extremely difficult. The issue is compounded by the fact that databases and big data stores are a prime target for hackers due to the amount of sensitive information residing within, including customer information, intellectual property and proprietary secrets.

This data, stored in traditional relational databases or newer big data platforms, can be mission-critical and sensitive. As such, it requires tailored defenses not afforded by standard network and application security controls. While security measures tailored directly to a business’ needs are a step in the right direction, organizations must also recognize that technology alone isn’t enough to reduce the risk of compromise.

A truly effective database security program must incorporate people, process, and policy into a holistic approach customized to each company’s needs, and then be reinforced with robust security technologies. The technology is not a cure-all on its own; the human element plays a major role in creating an effective security program. Only when done thoroughly and effectively does this strategy provide a solid security framework for a business.

Continuous Monitoring

Continuous assessment is the first step in creating an effective database security plan. You need to know where your data resides before you can protect it. By keeping a constant pulse on the databases in your network, you can identify, classify and prioritize the systems that might require attention. Organizations should regularly review their environment’s accessible assets, user access levels and security feature usage, and should monitor their database stores for new, rogue or missing installations and objects. An accurate inventory of database instances in your environment is a critical-path item in establishing a holistic and effective database security program.
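
As a simple illustration of keeping a pulse on where database services are running (not a prescription from this article), the sketch below probes a list of hosts for the default ports used by common database engines. The host addresses, port list and timeout are hypothetical and would need to be tailored to your own environment and tooling.

# Minimal sketch: probe hosts for common database ports to help maintain
# an inventory of database instances. Hosts, ports and timeout are
# illustrative assumptions, not values from the article.
import socket

DB_PORTS = {
    1433: "Microsoft SQL Server",
    1521: "Oracle",
    3306: "MySQL/MariaDB",
    5432: "PostgreSQL",
    27017: "MongoDB",
    9042: "Cassandra",
}

def scan_host(host, timeout=1.0):
    """Return (port, engine) pairs on this host that accept a TCP connection."""
    found = []
    for port, engine in DB_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append((port, engine))
        except OSError:
            pass  # closed, filtered, or unreachable
    return found

if __name__ == "__main__":
    for host in ["10.0.0.5", "10.0.0.6"]:  # hypothetical inventory targets
        for port, engine in scan_host(host):
            print(f"{host}:{port} looks like {engine} -- is it on the approved list?")

In practice, the results would be reconciled against an approved inventory so that new or rogue installations stand out immediately.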

Inventory, Test, Eliminate and Enforce

Once you know where your data resides, you can work to monitor it and protect it from intruders. It only takes one unpatched, rogue database on the network to expose your entire datacenter to unchecked, malicious activity. Once a flaw is identified, an organization can quickly mitigate it and secure its ecosystem. During the assessment process, organizations should also regularly revisit how users are interacting with network data and what level of privileged access they hold. Knowing what actions users take, and revisiting who in the organization has access to sensitive information, is an important part of strengthening your network’s security, as illustrated in the sketch that follows.
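
To make the access-review step concrete, here is a small, hypothetical example; PostgreSQL and the psycopg2 driver are assumed purely for illustration, and the connection details and “approved” allowlist are invented. It lists roles holding elevated privileges so they can be compared against the people who actually need them.

# Hypothetical access-review sketch for a PostgreSQL database.
# Connection details and the approved-admin allowlist are assumptions.
import psycopg2

APPROVED_ADMINS = {"dba_alice", "dba_bob"}  # illustrative allowlist

conn = psycopg2.connect("dbname=appdb user=auditor host=10.0.0.5")
try:
    with conn.cursor() as cur:
        cur.execute("""
            SELECT rolname, rolsuper, rolcreaterole, rolcreatedb
            FROM pg_roles
            WHERE rolsuper OR rolcreaterole OR rolcreatedb
        """)
        for name, is_super, can_create_role, can_create_db in cur.fetchall():
            flag = "" if name in APPROVED_ADMINS else "  <-- not on approved list"
            print(f"{name}: superuser={is_super}, createrole={can_create_role}, "
                  f"createdb={can_create_db}{flag}")
finally:
    conn.close()

A periodic review like this keeps privileged access aligned with actual job responsibilities rather than accumulating over time.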

In all, constant assessment provides real protection for businesses by keeping decision makers aware of what is happening in a database, and helping define the security standards and compliance policies needed today.

Continuous Protection

Once all baselines—asset and human behavior alike—are established, businesses can begin effectively monitoring the database for anomalies. This is achieved by implementing risk mitigation and compensating controls, establishing acceptable-use policies for users and their activity, and auditing privileged user behavior in real time. Policy-based activity monitoring can then be set in place to alert whenever a rule is violated.
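
The sketch below illustrates the idea of policy-based activity monitoring in the simplest possible terms: each audit event is checked against a set of rules, and any violation raises an alert. The event fields, the specific policies and the alert channel are hypothetical stand-ins for whatever your monitoring tooling actually provides.

# Minimal sketch of policy-based activity monitoring. The event shape,
# policies and alert channel are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AuditEvent:
    user: str
    statement: str
    table: str
    timestamp: datetime

SENSITIVE_TABLES = {"customers", "payment_cards"}  # assumed data classification
SERVICE_ACCOUNTS = {"app_svc"}                     # accounts with no interactive use expected

def violations(event: AuditEvent):
    """Yield a description for every policy the event violates."""
    if event.table in SENSITIVE_TABLES and event.user not in {"dba_alice"}:
        yield f"{event.user} touched sensitive table {event.table}"
    if event.user in SERVICE_ACCOUNTS and not (9 <= event.timestamp.hour < 18):
        yield f"service account {event.user} active outside business hours"
    if event.statement.upper().startswith("DROP"):
        yield f"{event.user} issued a DROP statement: {event.statement}"

def alert(message: str):
    print(f"ALERT: {message}")  # stand-in for paging, ticketing or SIEM forwarding

event = AuditEvent("app_svc", "SELECT * FROM customers", "customers",
                   datetime(2015, 6, 1, 2, 30))
for message in violations(event):
    alert(message)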

By the testing phase, minor bugs should already have been addressed and human behaviors benchmarked, meaning that deviations from the norm should generate suspicion. From there, teams can contain the suspected threat, investigate, and take action.
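
As a toy illustration of flagging deviations from a benchmarked norm, the sketch below compares a user’s daily query count against a simple historical baseline; a value several standard deviations above the mean generates suspicion. Real deployments rely on far richer behavioral models, and all the numbers here are invented.

# Toy anomaly check against a behavioral baseline. The historical counts
# and threshold are invented for illustration.
from statistics import mean, stdev

def is_suspicious(history, today, threshold=3.0):
    """Flag today's count if it sits more than `threshold` standard
    deviations above the historical mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today > mu
    return (today - mu) / sigma > threshold

queries_per_day = [120, 135, 128, 140, 131, 125, 138]  # hypothetical baseline
print(is_suspicious(queries_per_day, 900))  # True: contain and investigate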

Effective database security should be accurate and intuitive, scalable for a distributed architecture, customizable in its policies, comprehensive in its reports and helpful in the prioritization of issues to be addressed. It should also be layered with existing security efforts and solutions in order to provide a holistic approach to security.

If done correctly, organizations should be able to simply and intuitively navigate their security measures and address risks in a timely and efficient manner. Having a complete picture of their data stores—who’s interacting with them and how—allows organizations to shorten their incident response time and reduce their overall exposure in case of a compromise.

Ultimately, database protection takes more than static defense measures or appliances. It requires a holistic approach that hinges on “a single truth” generated by the process outlined above. Only then will databases stay secure in today’s widely distributed, cloud-based networking paradigm. According to Trustwave’s 2015 Global Security Report, the median length of time it took to detect a breach was 86 days. Organizations cannot afford to wait for a breach, or to respond only once a compromise has occurred. They need to take a proactive approach and ensure they are continuously assessing and monitoring their ecosystem to stay ahead of impending threats.

About the Author

Mark Trinidad is a Senior Product Manager for Trustwave. His responsibilities include solutions for database security, vulnerability management (network and application), and security testing. He works with organizations around the world to guide them on how process, people, and technology need to align to solve problems and mitigate risk.
