“Data virtualization allows queries to be sent to distributed datasets. Secure execution environments, sandboxed environments that strictly control which data algorithms can access and where they can send results, can deliver only the desired analysis outputs without exposing raw data,” – says David Maher, CTO at Intertrust, in an exclusive interview with Enterprise Talk.
ET Bureau: The rise of big data and IoT means that the flow of data and information is increasingly between multiple parties and across disparate systems and platforms. How has this impeded the enterprise, and why are current solutions to the problem not sufficient?
David Maher: New revenue streams and business models are made possible by the increased use of data only when multiple stakeholders can both contribute data and collaborate in its use. Yet, enterprises traditionally have held their data in data silos, often distributed throughout the organization.
Current solutions to data collaboration generally fall into two categories: copying or moving the data to a new location for collaboration, or exposing the data through APIs. Moving data carries numerous security and governance risks as well as high costs.
In both cases, once the data is transferred to another organization, whether a partner or a third party, often the only means an enterprise has to control and audit its use is legal contracts, which can be difficult to enforce.
ET Bureau: Enterprise data are usually held across different configurations such as private cloud, data center, or public cloud. How can companies securely manage these diverse systems and datasets where data cannot be easily accessed?
David Maher: Several technologies can be combined to make this happen. Data virtualization allows queries to be sent to distributed datasets. Secure execution environments, sandboxed environments that strictly control which data algorithms can access and where they can send results, can deliver only the desired analysis outputs without exposing raw data.
Access to these data-derived assets for partners can be governed by flexible policy-based data management technology that uses fine-grained IAM methods to make sure only authorized users and systems can access the results they are entitled to. The end result is a tightly controlled yet flexible system where raw data can remain in the original location. Partners have appropriate access to the data-derived assets they need without the need to expose the original raw data.
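The fine-grained, policy-based access control described here can be illustrated with a minimal sketch. The names (`Policy`, `PolicyStore`, the partner and asset identifiers) are hypothetical and not part of any Intertrust API; the sketch only shows the deny-by-default pattern in which a partner can access a data-derived asset solely when an explicit policy entitles them to it:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of policy-based, fine-grained access control:
# each policy names a principal (IAM subject), a data-derived asset,
# and the actions that principal is entitled to perform on it.

@dataclass(frozen=True)
class Policy:
    principal: str        # user or system identity
    asset: str            # data-derived asset, e.g. an analysis result
    actions: frozenset    # entitled actions, e.g. {"read"}

@dataclass
class PolicyStore:
    policies: list = field(default_factory=list)

    def grant(self, principal, asset, actions):
        self.policies.append(Policy(principal, asset, frozenset(actions)))

    def is_authorized(self, principal, asset, action):
        # Deny by default: access requires an explicit matching policy.
        return any(
            p.principal == principal and p.asset == asset and action in p.actions
            for p in self.policies
        )

store = PolicyStore()
store.grant("partner-a", "regional-sales-summary", {"read"})

print(store.is_authorized("partner-a", "regional-sales-summary", "read"))  # True
print(store.is_authorized("partner-a", "raw-sales-records", "read"))       # False
```

Note how the raw dataset never appears in the policy store at all: only data-derived assets are exposed, so an unlisted asset is unreachable by construction.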
ET Bureau: How can a data interoperability and data exchange layer provide a secure collaboration environment, and how is this different from data warehouses and data lakes?
David Maher: The system described above provides data interoperability since data virtualization technology allows queries to be addressed to, and results reported from, datasets held in different formats and repositories. Secure execution environments combined with data governance and action-logging technologies create a data exchange layer so that data-derived assets can be exchanged securely with partners.
Together, they provide a secure collaboration environment since selected partners can be given tightly controlled access to the data-derived assets they need while protecting the raw data.
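A minimal sketch can make this combination concrete. The datasets, partner name, and function below are hypothetical illustrations, not Intertrust's implementation: two sources held in different formats stay where they are, a virtual query fans out to both, only the aggregate result (a data-derived asset) is released, and every access is appended to an action log for audit:

```python
import statistics
from datetime import datetime, timezone

# Hypothetical sketch of data virtualization plus a logged exchange
# layer: raw records remain in place, a federated query spans two
# differently formatted sources, and only the aggregate is returned.

# Dataset A: key-value store of daily readings (stays in place).
SITE_A = {"2024-01-01": 102.4, "2024-01-02": 98.7}

# Dataset B: row-oriented records in a different format (stays in place).
SITE_B = [("2024-01-01", 95.1), ("2024-01-02", 101.8)]

ACTION_LOG = []  # append-only audit trail of partner accesses

def query_mean(partner):
    """Fan the query out to both sources; return only the aggregate."""
    values = list(SITE_A.values()) + [v for _, v in SITE_B]
    ACTION_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "partner": partner,
        "action": "query_mean",
        "sources": ["SITE_A", "SITE_B"],
    })
    return statistics.mean(values)

result = query_mean("partner-a")
print(round(result, 2))   # 99.5
print(len(ACTION_LOG))    # 1
```

The partner receives a single derived number while the per-site records never leave their stores, and the log preserves who queried what, when.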
One major difference from data warehouses and data lakes is that the raw data does not need to be moved from its original location, which avoids the security risks and costs associated with data ingestion, export, and synchronization.
ET Bureau: Data security and governance are among the most important aspects of data management. How has your experience in data rights management and data security influenced the creation of the Intertrust Platform?
David Maher: Data rights management (DRM), as it is practiced in protecting entertainment data distributed by content distribution systems, secures and controls access to data as it travels among multiple systems. These include content origination servers, packaging servers, content distribution networks, and disparate end-user playback devices.
It also has to enforce the rights and interests of loosely coupled organizations such as content rights holders, distribution service operators, device manufacturers, and end users. Data access rights in particular need to be managed flexibly, since there is a wide range of access rights that can depend on such things as end-user subscription rights and policies set by content rights owners and distribution service operators.
Broadly distributed data management systems, allowing many parties to govern access to data from disparate sources, use security and trust management capabilities that Intertrust has developed and practiced over the past 30 years. We have adapted their use to address numerous use cases, ranging from protection and governance of human genomic information to secure management of information for, and control of, networks of distributed energy resources.
ET Bureau: In terms of building value-added services to monetize their data, what risks and challenges deter enterprises from doing this?
David Maher: The main challenge enterprises face in monetizing their existing data is that they most often hold only part of a dataset that is commercially useful. They could sell that data to another enterprise, but confidentiality, privacy, and regulatory issues may make that infeasible.
Even if those issues are addressed, the bulk of the value of the data will be captured by enterprises that have the knowledge and capabilities to combine it with data from other sources, transforming it from raw data to rich information and in-depth contextual knowledge.
However, a new type of commercial venture is emerging: the data co-op. Here, multiple enterprises use a distributed data management system run by a trusted third party that expertly addresses these issues and provides a market interface for extracting optimal value from a richer set of information in a commercially useful context.
With over 30 years of experience in secure computing, David Maher is CTO at Intertrust. His responsibilities include new technologies for trusted distributed data management systems. Before joining Intertrust, Maher was chief scientist for AT&T Secure Communications Systems. He was also chief architect for AT&T’s products used by the White House and DoD for top-secret communications.