
Data lakes, such as Oracle Big Data Service, offer an efficient and secure way to store all of your incoming data. Worldwide data is projected to grow from 2.7 zettabytes to 175 zettabytes by 2025, which means an exponentially growing number of ones and zeroes pouring in from an ever-increasing number of data sources. Unlike data warehouses, which require structured and processed data, data lakes act as a single repository for raw data from numerous sources.

What do you get when you establish a single source of truth for all your data? Having all that data in one place creates a cascading effect of benefits, starting with simpler IT infrastructure and processes and rippling outward to the workflows of end users and analysts. Streamlined and efficient, a single data lake makes everything from analysis to reporting faster and easier.

There’s just one issue: all of your proverbial digital eggs are in one “data lake” basket.

For all of the benefits of consolidation, a data lake also comes with the inherent risk of a single point of failure. Of course, in today’s IT world, it’s rare for IT departments to set anything up with a true single point of failure—backups, redundancies, and other standard failsafe techniques tend to protect enterprise data from truly catastrophic loss. This is doubly true when enterprise data lives in the cloud, such as on Oracle Cloud Infrastructure: data entrusted to a cloud provider rather than stored locally gains the added benefit of a vendor whose entire business is built around keeping that data safe.

Does that mean that your data lake comes protected from all threats out of the box? Not necessarily; as with any technology, a true assessment of security risks requires a 360-degree view of the situation. Before you jump into a data lake, consider the following six ways to secure your configuration and safeguard your data.

Establish Governance: A data lake is built for all data. As a repository for raw and unstructured data, it can ingest just about anything from any source. But that doesn’t necessarily mean that it should. The sources you select for your data lake should be vetted for how that data will be managed, processed, and consumed. The perils of a data swamp are very real, and avoiding them depends on the quality of several things: the sources, the data from the sources, and the rules for treating that data when it is ingested. By establishing governance, it’s possible to identify things such as ownership, security rules for sensitive data, data history, source history, and more.
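To make that concrete, here is a minimal sketch of what a governance record attached at ingestion time might look like. The field names, the approved-source list, and the register_dataset helper below are purely illustrative assumptions, not any particular catalog’s schema.

```python
# A hypothetical governance record captured when a dataset lands in the lake.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GovernanceRecord:
    source: str          # vetted source system the data came from
    owner: str           # team accountable for this dataset
    sensitivity: str     # e.g. "public", "internal", "restricted"
    retention_days: int  # how long the data may be kept
    ingested_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Illustrative list of sources that have been vetted for ingestion.
APPROVED_SOURCES = {"crm_exports", "web_clickstream", "iot_telemetry"}

def register_dataset(path: str, record: GovernanceRecord) -> None:
    """Reject data from unvetted sources before it ever lands in the lake."""
    if record.source not in APPROVED_SOURCES:
        raise ValueError(f"Source '{record.source}' has not been vetted for ingestion")
    # In practice the record would be written to a data catalog alongside `path`.
    print(f"Registered {path} owned by {record.owner} ({record.sensitivity})")

register_dataset(
    "raw/crm_exports/2024-06-01/accounts.csv",
    GovernanceRecord(source="crm_exports", owner="sales-ops",
                     sensitivity="restricted", retention_days=730),
)
```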

Control Access: One of the biggest security risks involved with data lakes is related to data quality. Rather than a macro-scale problem such as an entire dataset coming from a bad source, the risk can stem from individual files within a dataset, introduced either during ingestion or later through attacker infiltration. For example, malware can hide within a seemingly benign raw file, waiting to execute. Another possible vulnerability stems from user access—if sensitive data is not properly protected, unscrupulous users may be able to read those records, and possibly even modify them. These examples demonstrate the importance of establishing various levels of user access across the entire data lake. By creating strategic and strict rules for role-based access, it’s possible to minimize the risks to data, particularly sensitive data or raw data that has yet to be vetted and processed. In general, the widest access should be reserved for data that has been confirmed to be clean, accurate, and ready for use, limiting the chances of anyone opening a potentially damaging file or gaining inappropriate access to sensitive records.
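As a rough illustration of that principle, the sketch below checks a role’s clearance against a dataset’s sensitivity label and against whether the data has been verified as clean. The roles, labels, and check_access helper are hypothetical; in practice this enforcement usually lives in the platform’s identity and access management layer rather than in application code.

```python
# Illustrative mapping of roles to the sensitivity labels they may read.
ROLE_CLEARANCE = {
    "business-user":  {"public"},
    "analyst":        {"public", "internal"},
    "data-scientist": {"public", "internal"},
    "data-steward":   {"public", "internal", "restricted"},
}

def check_access(role: str, sensitivity: str, verified_clean: bool) -> bool:
    """Grant wide access only to data that is cleared for the role and
    already confirmed clean (scanned and processed)."""
    if not verified_clean and role != "data-steward":
        return False  # unvetted raw files stay locked down
    return sensitivity in ROLE_CLEARANCE.get(role, set())

print(check_access("analyst", "internal", verified_clean=True))    # True
print(check_access("analyst", "restricted", verified_clean=True))  # False
print(check_access("analyst", "internal", verified_clean=False))   # False
```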

Use Machine Learning: Some data lake platforms come with built-in machine learning (ML) capabilities. ML can significantly reduce security risk by accelerating the processing and categorization of raw data, particularly when used in conjunction with a data cataloging tool. With this level of automation, large amounts of data can be processed for general use while red flags in raw data are surfaced for further security investigation.
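Here is one hedged sketch of how such a check might look, using an off-the-shelf anomaly detector (scikit-learn’s IsolationForest) on simple file-level features. The features, sample values, and thresholds are assumptions for illustration, not a built-in capability of any specific platform.

```python
# Flag suspicious incoming files for security review before they leave the raw zone.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row describes one previously ingested, known-good file:
# [size_mb, byte_entropy, embedded_executable_sections]
historical_files = np.array([
    [12.4, 4.1, 0], [8.9, 3.8, 0], [15.2, 4.4, 0], [10.1, 4.0, 0],
    [11.7, 4.2, 0], [9.5, 3.9, 0], [13.0, 4.3, 0], [14.8, 4.1, 0],
])

detector = IsolationForest(contamination=0.05, random_state=42)
detector.fit(historical_files)

incoming = np.array([
    [11.0, 4.0, 0],   # looks like ordinary business data
    [2.3, 7.9, 3],    # small, high-entropy, embedded executables: a red flag
])

# predict() returns -1 for outliers and 1 for inliers.
for row, label in zip(incoming, detector.predict(incoming)):
    status = "route to security review" if label == -1 else "promote to processing"
    print(row, "->", status)
```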

Partitions and Hierarchy: When data gets ingested into a data lake, it’s important to store it in a proper partition. The general consensus is that data lakes require several standard zones to house data based on how trusted it is and how ready-to-use it is. These zones are:

  • Temporal: Where ephemeral data, such as copies and streaming spools, lives prior to deletion.
  • Raw: Where raw data lives prior to processing. Data in this zone may also be further encrypted if it contains sensitive material.
  • Trusted: Where data that has been validated as trustworthy lives for easy access by data scientists, analysts, and other end users.
  • Refined: Where enriched and manipulated data lives, often as final outputs from tools.

Using zones like these creates a hierarchy that, when coupled with role-based access, can help minimize the possibility of the wrong people accessing potentially sensitive or malicious data. 
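To illustrate, the sketch below promotes an object key from one zone prefix to the next only after it passes a validation step. The prefixes and the placeholder validate check are assumptions about how zones might map onto object-storage paths.

```python
# Ordered zone prefixes, from least to most trusted (illustrative layout).
ZONE_PREFIXES = ["lake/temporal/", "lake/raw/", "lake/trusted/", "lake/refined/"]

def validate(path: str) -> bool:
    # Placeholder for real checks: schema conformance, PII scan, malware scan.
    return path.endswith(".parquet")

def promote(path: str) -> str:
    """Move an object key one zone up the hierarchy if it passes validation."""
    for lower, higher in zip(ZONE_PREFIXES, ZONE_PREFIXES[1:]):
        if path.startswith(lower):
            if not validate(path):
                raise ValueError(f"{path} failed validation; it stays in {lower}")
            return path.replace(lower, higher, 1)
    raise ValueError(f"{path} is not in a promotable zone")

print(promote("lake/raw/crm_exports/accounts.parquet"))
# -> lake/trusted/crm_exports/accounts.parquet
```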

Data Lifecycle Management: Which data is constantly used by your organization? Which data hasn’t been touched in years? Data lifecycle management is the process of identifying and phasing out stale data. In a data lake environment, older stale data can be moved to a tier designed for cheap, efficient storage, keeping it available should it ever be needed without tying up premium resources. A data lake powered by ML can even use automation to identify and process stale data to maximize overall efficiency. While this may not touch directly on security, good lifecycle management keeps a data lake running like a well-oiled machine rather than collapsing under the weight of its own data.
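As a simple illustration, the sketch below sweeps a hypothetical in-memory catalog for objects untouched for more than a year and earmarks them for a cheaper archive tier. Real deployments typically lean on the storage service’s built-in lifecycle policies instead of custom code.

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=365)

# Hypothetical catalog of object keys and their last-accessed timestamps.
catalog = {
    "lake/trusted/crm_exports/2021/accounts.parquet": datetime(2022, 3, 1, tzinfo=timezone.utc),
    "lake/trusted/web_clickstream/sessions.parquet": datetime.now(timezone.utc),
}

def stale_objects(last_accessed: dict) -> list:
    """Return keys that have not been read within the STALE_AFTER window."""
    now = datetime.now(timezone.utc)
    return [key for key, ts in last_accessed.items() if now - ts > STALE_AFTER]

for key in stale_objects(catalog):
    print(f"move {key} to the archive storage tier")
```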

Data Encryption: The idea that encryption is vital to data security is nothing new, and most data lake platforms come with their own methodology for data encryption. How your organization executes, of course, is critical. Regardless of which platform you use or whether you deploy on premises or in the cloud, a sound data encryption strategy that works with your existing infrastructure is absolutely vital to protecting all of your data, whether in motion or at rest—in particular, your sensitive data.
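For example, here is a minimal sketch of client-side encryption of a sensitive file before it is uploaded to the lake, using the cryptography package’s Fernet recipe (AES-based authenticated encryption). Key management is deliberately glossed over, and most cloud platforms also encrypt data at rest on the server side.

```python
from cryptography.fernet import Fernet

# In production the key would come from a key-management service,
# never generated ad hoc or stored alongside the data.
key = Fernet.generate_key()
fernet = Fernet(key)

plaintext = b"ssn,name\n123-45-6789,Jane Doe\n"
ciphertext = fernet.encrypt(plaintext)   # safe to store in the raw zone
restored = fernet.decrypt(ciphertext)    # only holders of the key can read it

assert restored == plaintext
print(len(plaintext), "bytes in,", len(ciphertext), "bytes encrypted")
```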

Create Your Secure Data Lake

What’s the best way to create a secure data lake? With Oracle’s family of products, a powerful data lake is just steps away. Built on the foundation of Oracle Cloud Infrastructure, Oracle Big Data Service delivers cutting-edge data lake capabilities while integrating with premier analytics tools and one-touch Hadoop security functions. Learn more about Oracle Big Data Service to see how easy it is to deploy a powerful cloud-based data lake in your organization—and don’t forget to subscribe to the Oracle Big Data blog to get the latest posts sent to your inbox.

Via: https://blogs.oracle.com/
