By ET Bureau - April 07, 2021
MemVerge™, the pioneers of Big Memory software, today announced the release of Memory Machine software version 1.2. The software delivers Big Memory performance and capacity by leveraging up to 40 cores in 3rd Gen Intel Xeon Scalable processors (codenamed Ice Lake) and up to 6TB of capacity per socket with Intel Optane persistent memory 200 series.
The company also announced its membership in the CXL™ Consortium, and five Big Memory Labs at Arrow, Intel, MemVerge, Penguin Computing, and WWT that are now equipped and available for Big Memory demonstrations, proof-of-concept testing, and software integration.
“Memory Machine v1.2 is designed to allow application vendors and end-users to take full advantage of Intel’s latest Xeon Scalable processor and Optane memory technology,” said Charles Fan, CEO of MemVerge. “We started by providing access to new levels of performance and capacity without requiring changes to applications.”
Big Memory is Bigger, Faster, and More Available with Memory Machine v1.2
Pioneered by MemVerge, Big Memory software uniquely makes 100% use of available memory capacity while providing new operational capabilities to memory-centric workloads such as virtualized cloud infrastructure, in-memory databases, genomics, and animation/VFX. Memory Machine v1.2 adds capabilities that make Big Memory bigger, faster, and more available to an ever-broader set of applications.
According to Mark Wright, Technology Manager for Chapeau Studios, “Initially, we opened a poly-dense scene in Maya and it took two-and-a-half minutes. Then, we opened a scene from a snapshot we’d taken with Memory Machine and it took eight seconds.
In addition to opening exponentially faster, another benefit of the Memory Machine snapshot is that it gets an artist right to the spot in the application where they were when they created a snapshot. There’s no need to repopulate the entire application.”
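Memory Machine's in-memory snapshots are proprietary, but the general idea described above — capture the application's in-memory state once, then restore it directly instead of rebuilding it from scratch — can be sketched generically. The following Python snippet is only an illustrative stand-in (the file name, `build_scene` function, and use of `pickle` are assumptions for the sketch, not MemVerge's mechanism):

```python
import pickle

def build_scene():
    # Stand-in for an expensive load, e.g. opening a poly-dense Maya scene.
    return {"vertices": list(range(100_000)), "frame": 42}

# First open: pay the full build cost once.
scene = build_scene()

# Take a "snapshot" of the in-memory state.
with open("scene.snap", "wb") as f:
    pickle.dump(scene, f)

# Later: restore directly from the snapshot instead of rebuilding,
# landing back at exactly the same state (same frame, same data).
with open("scene.snap", "rb") as f:
    restored = pickle.load(f)

print(restored["frame"])
```

The point of the sketch is the workflow, not the serialization: restore cost is proportional to copying state back, not to recomputing it, which is why a snapshot restore can be orders of magnitude faster than a fresh load.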
Apps Sizzle on Ice Lake in Testing by StorageReview.com
The independent lab StorageReview.com assembled a server configured with Intel Optane 200 Series persistent memory, 3rd Gen Intel Xeon Scalable CPUs, and Memory Machine software from MemVerge. The lab performed bulk insert and read tests with kdb+, as well as Redis Quick Recovery with ZeroIO Snapshot and Redis Clone with ZeroIO Snapshot.
In summary, the configuration with 200 Series PMEM, 3rd Gen Intel Xeon Scalable CPUs, and Memory Machine software demonstrated 2x read performance and 3x write performance.
According to Kevin O’Brien, Lab Director at StorageReview.com, “The real benefits of PMEM show up when you can leverage it at the byte level with the appropriate software. In many cases, application developers like SAP tune their application to be able to leverage PMEM. While that works for some applications, there’s another option.
Leverage a software-defined solution that’s built from the ground up to help businesses leverage all of the performance and persistence benefits PMEM 200 offers. To test this latest generation of PMEM, that’s exactly what we did.”
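The "byte level" access O’Brien refers to is typically exposed to software through memory mapping (on Linux, DAX-mapped persistent memory). As a hedged sketch only, the snippet below uses Python's `mmap` over an ordinary file as a stand-in for a DAX-mapped PMEM region — individual bytes are stored and loaded in place, with no per-access block I/O call (the file path is illustrative; real PMEM programming would use something like PMDK):

```python
import mmap
import os

PATH = "pmem_standin.bin"  # ordinary file standing in for a DAX-mapped region

# Create and size the backing file.
with open(PATH, "wb") as f:
    f.truncate(4096)

# Map it and update individual bytes in place -- byte-granular stores,
# not read()/write() syscalls per access.
with open(PATH, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as m:
        m[0:5] = b"hello"   # byte-level store into the mapped region
        m.flush()           # analogous to persisting the update to media

# The data is durable in the backing store; with real PMEM it would
# survive power loss at memory speeds.
with open(PATH, "rb") as f:
    data = f.read(5)

os.remove(PATH)
print(data)
```

With a plain file the durability guarantee comes from the page cache and `flush()`; persistent memory gives the same load/store programming model directly against durable media, which is what byte-addressable software like Memory Machine builds on.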
Big Memory Labs Accelerate New Technology Evaluation and Integration
Big Memory makes use of Intel Optane persistent memory and MemVerge Memory Machine software. For IT organizations and vendors that need quick access to a Big Memory environment, five Big Memory labs are now available for demonstrations, proof-of-concept testing, and software integration.
The Big Memory Opportunity
DRAM was invented in 1969. More than 50 years later, it remains expensive, scarce, and volatile, and its pace of evolution clearly cannot keep up with the demands of modern applications that must process large quantities of data and deliver results in real time.
Fortunately, the invention of Intel 3D XPoint technology breathed new life into the aging segment, and marked the advent of the age of Big Memory Computing.
By 2024, almost a quarter of all data created will be real-time data, and two-thirds of Global 2000 corporations will have deployed at least one real-time application that is considered mission-critical, according to IDC.
These next-generation applications (NGAs) frequently employ big data analytics and AI/ML. The real-time workloads are found in many industries on-premises and in the cloud. Examples include in-memory databases and fraud analytics in financial services, customer profiling in social media, recommendation engines in retail, 3D animation in media & entertainment, genomics in health sciences, and security forensics, to name just a few.
The result is explosive adoption of Big Memory Computing designed for big and fast data. IDC estimates the market opportunity for persistent memory will grow from $65 million to $2.6 billion by 2023. Coughlin & Associates estimates that revenue for persistent memory will reach $25 billion by 2030, equal to revenue for DRAM.