Integrating Apache Atlas and Apache Ranger Enables Classification-Based Security Policies

As organizations pursue Hadoop initiatives to capture new opportunities for data-driven insights, data governance and data security requirements can pose a key challenge. Hortonworks created the Apache Hadoop Data Governance Initiative to address the need for an open source governance solution that manages data classification, data lineage, security, and data lifecycle management.


Effective data management and control cannot be reactive or purely forensic. Centralized access control based on consistent data classification is the foundation of dynamic security and a core requirement of Open Enterprise Hadoop. To meet that requirement, Hortonworks has released a new public preview that pairs Apache Atlas with Apache Ranger, delivering data classification together with security policy enforcement.


Apache Atlas, created as part of the Hadoop Data Governance Initiative, empowers organizations to apply consistent data classification across the data ecosystem. Apache Ranger provides centralized security administration for Hadoop. By integrating Atlas with Ranger, Hortonworks enables enterprises to institute dynamic access policies at run time that proactively prevent violations from occurring.
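To make the classification side concrete, the sketch below defines a "PII" tag in Atlas and attaches it to a Hive column. It assumes the Atlas v2 REST API; the host, credentials, and qualifiedName are placeholders, and endpoint paths can vary by Atlas version.

```python
# Hypothetical sketch: define a "PII" classification (tag) in Apache Atlas and
# attach it to a Hive column. Host, credentials, and qualifiedName are assumed.
import requests

ATLAS = "http://atlas-host:21000/api/atlas/v2"   # assumed Atlas server URL
AUTH = ("admin", "admin")                        # assumed credentials

# 1. Create the classification (tag) type if it does not already exist.
typedef = {"classificationDefs": [{
    "name": "PII",
    "description": "Personally identifiable information",
    "superTypes": [],
    "attributeDefs": []
}]}
requests.post(f"{ATLAS}/types/typedefs", json=typedef, auth=AUTH)

# 2. Look up the Hive column entity by its qualified name (illustrative value).
col = requests.get(
    f"{ATLAS}/entity/uniqueAttribute/type/hive_column",
    params={"attr:qualifiedName": "default.customers.ssn@mycluster"},
    auth=AUTH,
).json()
guid = col["entity"]["guid"]

# 3. Attach the PII classification to that column.
requests.post(
    f"{ATLAS}/entity/guid/{guid}/classifications",
    json=[{"typeName": "PII"}],
    auth=AUTH,
)
```

Once the column is tagged, the classification travels with the metadata, so Ranger can enforce policy on the tag rather than on the individual asset.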


The Atlas/Ranger integration represents a paradigm shift for big data governance and data security in Apache Hadoop. By integrating Atlas with Ranger, enterprises can now implement dynamic, classification-based security policies in addition to role-based security. Ranger's centralized platform empowers data administrators to define security policies based on Atlas metadata tags or attributes and to apply those policies in real time to an entire hierarchy of data assets, including databases, tables, and columns.
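For the enforcement side, a tag-based Ranger policy might look like the following sketch, which allows only a "compliance" group to run Hive SELECT on anything classified as PII. The service name, group, and access-type strings are assumptions; the payload follows Ranger's public REST policy model, but check the documentation for your Ranger version before relying on it.

```python
# Hypothetical sketch: create a Ranger tag-based policy that restricts access
# to assets carrying the Atlas "PII" tag. Service name, group, and access
# types below are assumed values, not confirmed configuration.
import requests

RANGER = "http://ranger-host:6080/service/public/v2/api"  # assumed Ranger URL
AUTH = ("admin", "admin")                                  # assumed credentials

policy = {
    "service": "cl1_tag",                 # assumed name of the tag-based service
    "name": "PII access for compliance",
    "resources": {
        "tag": {"values": ["PII"], "isExcludes": False, "isRecursive": False}
    },
    "policyItems": [{
        "groups": ["compliance"],
        "accesses": [{"type": "hive:select", "isAllowed": True}],
        "delegateAdmin": False
    }]
}
requests.post(f"{RANGER}/policy", json=policy, auth=AUTH)
```

Because the policy is keyed to the tag rather than to a specific database, table, or column, it automatically covers every asset that carries the classification, wherever it sits in the hierarchy.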


Hortonworks empowers data managers to ensure the transparency, reproducibility, auditability, and consistency of the Data Lake and the assets it contains. Apache Atlas now provides the ability to visualize cross-component lineage, delivering a complete view of data movement across analytic engines such as Apache Storm, Kafka, Falcon, and Hive. Hadoop operators, data stewards, and compliance personnel can now visualize a data set's lineage and then drill down into operational, security, and provenance-related details. Because this tracking is done at the platform level, any application that uses multiple engines is tracked natively, extending visibility beyond a single application view.
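As a rough illustration of how that lineage could be consumed programmatically, the sketch below fetches cross-component lineage for one entity and prints its upstream and downstream relations. It assumes the Atlas v2 lineage endpoint; the GUID is a placeholder for a real table, topic, or feed entity.

```python
# Hypothetical sketch: pull cross-component lineage for a data set from Atlas
# and list the relations between entities and the processes (Hive queries,
# Storm topologies, Falcon feeds, ...) that produced or consumed them.
import requests

ATLAS = "http://atlas-host:21000/api/atlas/v2"   # assumed Atlas server URL
AUTH = ("admin", "admin")                        # assumed credentials
GUID = "<entity-guid>"                           # placeholder entity GUID

lineage = requests.get(
    f"{ATLAS}/lineage/{GUID}",
    params={"direction": "BOTH", "depth": 3},
    auth=AUTH,
).json()

for rel in lineage.get("relations", []):
    print(rel["fromEntityId"], "->", rel["toEntityId"])
```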