Reducing the Constraints of Adopting In-Memory Digital Integration Hubs

DAVID BRIMLEY on Digital Integration

“Digital integration hubs give back-end data sources a central access point for multiple applications to call on uniformly in complex systems. By moving towards a system that doesn’t need to issue various API callouts before processing data, enterprises save valuable time,” says David Brimley, CPO of Hazelcast, in an exclusive interview with ET.

ET Bureau- How will the latest data grid help businesses to transition smoothly to digital initiatives?

David Brimley- The latest version of Hazelcast IMDG, our in-memory data grid, adds several features to improve performance, scalability, and resilience. These are all important to address for several reasons. First and foremost, enterprises today are tasked with managing more data than ever before. Seagate’s recent Rethink Data survey found that enterprises expect a 42.2% annual increase in the volume of data they generate over the two years through 2022. So, enterprises need to manage increasing amounts of data while satisfying customers who expect ever-higher levels of performance.

Simultaneously, the tech stacks that manage all this data are getting more complicated, and enterprises are turning to distributed systems to serve widespread customer bases. Digital integration hubs are emerging to confront the challenges of distributed systems and to manage various data sources by providing a single point of access and standardized APIs.

IMDG 4.1, which just rolled out in beta, supports enterprises using digital integration hubs in a couple of different ways. We’ve added SQL support, which lets developers work with Hazelcast in their hubs using a well-known language, improved cluster rebalancing to reduce downtime, and further optimized our platform for Intel Optane DC Persistent Memory Modules. We have made Hazelcast more flexible, cost-effective, and easier to use, while further enhancing the results we deliver.


ET Bureau- What will be the benefits of adopting a new architecture with a standardized API and a single access point?

David Brimley- Digital integration hubs are valuable for enterprises for several reasons. Moving legacy systems towards new data channels successfully is challenging because companies need to fit many different pieces together while ensuring the architecture they’re using is manageable. That’s not an easy task.

Digital integration hubs give back-end data sources a central access point for multiple applications to call on uniformly in complex systems. By moving towards a system that doesn’t need to issue various API callouts before processing data, enterprises save valuable time. Excessive callouts create unnecessary latency, and even delays of several microseconds can frustrate end customers who require immediate responsiveness, real-time analytics, and more.
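The idea of a single access point that absorbs repeated callouts can be made concrete with a small sketch. The class and source names below are invented for illustration and are not Hazelcast's API; the point is only that a hub-style facade lets two application reads cost a single back-end callout:

```python
import time

class DigitalIntegrationHub:
    """Hypothetical sketch: a single access point fronting several
    back-end data sources, caching results so repeated reads do not
    trigger repeated back-end API callouts."""

    def __init__(self, sources, ttl_seconds=60):
        self.sources = sources   # name -> callable that fetches fresh data
        self.ttl = ttl_seconds
        self._cache = {}         # name -> (fetch_timestamp, value)

    def get(self, name):
        entry = self._cache.get(name)
        if entry and time.time() - entry[0] < self.ttl:
            return entry[1]              # served from the hub: no callout
        value = self.sources[name]()     # exactly one back-end callout
        self._cache[name] = (time.time(), value)
        return value

# Usage: two application reads, but only one back-end callout.
calls = {"n": 0}
def crm_lookup():
    calls["n"] += 1
    return {"customer": "acme", "tier": "gold"}

hub = DigitalIntegrationHub({"crm": crm_lookup})
first = hub.get("crm")
second = hub.get("crm")
print(calls["n"])  # 1 -- the second read hit the hub, not the back end
```

The caching policy (a simple TTL) is an assumption chosen for brevity; real hubs typically combine caching with event-driven synchronization back to the systems of record.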

ET Bureau- What, according to you, are the best scenarios of increased SQL support?

David Brimley- Developers are widely familiar with SQL, and adding support for it increases the usability of Hazelcast. In previous releases, Hazelcast IMDG supported querying and aggregating over maps using an SQL-like syntax through its existing query engine. Even so, the community voiced strong demand for additional features, and we wanted to be responsive to that demand.

The SQL support we’ve added in IMDG 4.1, which also supports APIs for languages including Java and Python, allows us to retrieve data using another well-known API, making digital integration hubs easier for our users to manage. This also simplifies implementation and deployment.

Many businesses are managing tech stacks that are becoming increasingly complex. By adding SQL support, we are making it easier for companies to work with us on top of legacy systems and with languages they understand.


We’re committed to adding more support for SQL, too. Our latest version’s SQL support is intended for “select” queries on IMDG maps already populated with data. In future IMDG versions, we’ll be introducing additional capabilities like joins, aggregations, and sorting.
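The interview does not show Hazelcast's actual API, so as a language-neutral illustration of the scope described above (plain "select" queries with a filter, with joins and aggregations deferred to later versions), here is the style of query run against an in-memory SQLite table standing in for a populated IMDG map; the table name and columns are invented for the example:

```python
import sqlite3

# In-memory stand-in for an already-populated IMDG map: key -> record.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, tier TEXT)"
)
conn.executemany(
    "INSERT INTO customers VALUES (?, ?, ?)",
    [(1, "acme", "gold"), (2, "globex", "silver"), (3, "initech", "gold")],
)

# A plain SELECT with a filter -- the query shape IMDG 4.1's SQL support
# targets; joins, aggregations, and sorting are described as future work.
rows = conn.execute(
    "SELECT id, name FROM customers WHERE tier = ? ORDER BY id", ("gold",)
).fetchall()
print(rows)  # [(1, 'acme'), (3, 'initech')]
```

The value for developers is exactly what the interview claims: the filter is expressed in a language they already know, rather than in a product-specific predicate API.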

ET Bureau- Will the new features have a sizeable impact on the performance and cost metrics?

David Brimley- We do expect IMDG 4.1 users will see lower deployment and implementation costs. Because we’ve added SQL support, which many developers are familiar with, we’re reducing the learning curve needed to implement IMDG, which means developers can begin to realize its full business value faster.

Further, IMDG 4.1 goes further to protect users from the downtime and delays that can be extremely harmful in digital integration hubs responsible for managing entire distributed ecosystems. Downtime can cost enterprises revenue, or even customers, if they’re dissatisfied with services.

IMDG 4.1 addresses cluster rebalancing to protect against downtime in the event of hardware failure. If hardware crashes, clusters need to restore lost data by promoting backups, which requires rebalancing across the cluster. IMDG 4.1 recovers faster from an outage and reduces the time spent in a less-than-optimal state by an order of magnitude.

To give an example, let’s consider a network partition that has disconnected one node in a 10-node cluster. A cluster like this might be responsible for storing terabytes of data, a truly massive quantity, but not uncommon for enterprises to generate. IMDG 4.1 can now complete rebalancing in about two minutes, down from the half-hour it would previously have taken.
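The key property behind that recovery time can be shown with a toy model. This is not Hazelcast's actual rebalancing algorithm; it is a sketch (with an invented round-robin partition table, though 271 partitions is Hazelcast's documented default) showing that when one node of ten fails, only that node's share of the partitions needs to move:

```python
def assign(partition_count, nodes):
    """Toy partition table: round-robin assignment of partition ids
    to node names."""
    return {p: nodes[p % len(nodes)] for p in range(partition_count)}

def rebalance(table, failed, survivors):
    """Reassign only the failed node's partitions to the survivors
    (promoting their notional backups); all other partitions stay put."""
    moved = 0
    new_table = dict(table)
    for p, owner in table.items():
        if owner == failed:
            new_table[p] = survivors[moved % len(survivors)]
            moved += 1
    return new_table, moved

nodes = [f"node{i}" for i in range(10)]
table = assign(271, nodes)  # 271 is Hazelcast's default partition count
survivors = [n for n in nodes if n != "node3"]
new_table, moved = rebalance(table, "node3", survivors)

print(moved)  # only node3's ~1/10 share of the partitions moved
```

Because the work is proportional to the failed node's share of the data rather than to the whole cluster, speeding up that redistribution step is what cuts the time spent in a degraded state.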


We’ve also taken steps to become even more cost-effective and efficient when installed in a system with Intel Optane DC Persistent Memory Modules. In IMDG 4.1, we enhanced the ability to use all modules installed in a system, enabling enterprises to build genuinely massive terabyte-sized in-memory deployments. We’ve also added performance tuning options that can increase the throughput of a system using these modules by 50% to gain near RAM-like speeds, for approximately half the cost in some use cases.

David is Chief Product Officer at Hazelcast, where he helps expand the portfolio from its core in-memory data grid offering to address new use cases such as stream processing, cloud managed services, and digital integration.