The Future of In-Memory Computing | Straight Talk


By Dr. Ganapathi Pulipaka, CEO, DeepSingularity

This article is by Featured Blogger Dr. Ganapathi Pulipaka from his Medium page. Republished with the author’s permission.

In today’s world, the cost of managing data is high and the rewards are minimal, as traditional RDBMS tools cannot deliver the real-time data customers want. The world is experiencing a big data explosion, yet corporations are unable to seize the right opportunities through informed decision-making. Current memory solutions are scalable, but they require massive infrastructure investment. The world needs alternative memory solutions.

The key considerations for future memory solutions are power consumption, cost, time to market, density, scaling, and performance. NAND (Not AND) flash, 3D NAND (three-dimensional NAND), PCRAM (phase-change memory), STT-RAM (spin-transfer torque random-access memory), and ReRAM (resistive random-access memory) are the leading alternatives to DRAM (dynamic random-access memory). Within DRAM itself, the packaging alternatives of DDP (dual die package) and TSV (through-silicon via) have cost ramifications: TSV is projected to cost 22% more than DDP.

The complexities of in-memory computing arise from the economics of flash storage. The DRAM used by SAP HANA is an expensive alternative to classical disk-based RDBMS systems. Several emerging memory solutions explored as future memory technologies could replace DRAM.

In 2010, the Tokyo Institute of Technology reinvented data storage by encoding data with lasers throughout liquid crystals for permanent storage. CDs (compact discs) and DVDs (digital video discs) can store data on their surfaces, but they last only a few decades. In 2012, Hitachi announced that it had stored data permanently on a quartz glass plate that can withstand temperatures of 1,832°F and is also waterproof. Andra, the French nuclear waste management agency, is in the process of etching data onto sapphire and platinum discs that can last up to 10 million years.

In 2013, researchers at the University of Southampton in the UK began storing data on five-dimensional silica glass discs using femtosecond lasers. They demonstrated that each disc can hold 360 TB of data. Unlike CDs and DVDs, this information persists indefinitely without degrading. Permanent storage resolves many of the challenges of data retention, but it does not address the speed of data retrieval.

Today, organizations are unable to mine their data to perform real-time analytics, process the information, and harness hardware with economies of scale. Aerospike jumped into the in-memory computing fray with a hybrid memory architecture (DRAM and flash) that provides deep data analytics for spotting business trends. For one corporation, Aerospike reduced the server footprint from 184 servers to 10 and the cost from $2.5 million to $236,000.
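As a quick sanity check on the consolidation figures quoted above, a short Python sketch (the numbers are taken from the article; the percentage framing is my own):

```python
# Rough arithmetic on the Aerospike consolidation figures cited in the article.
servers_before, servers_after = 184, 10
cost_before, cost_after = 2_500_000, 236_000  # USD

server_reduction = 1 - servers_after / servers_before
cost_reduction = 1 - cost_after / cost_before

print(f"Server footprint reduced by {server_reduction:.0%}")  # 95%
print(f"Cost reduced by {cost_reduction:.0%}")                # 91%
```

In other words, the hybrid DRAM/flash approach cut both the hardware footprint and the spend by roughly an order of magnitude.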

SAP HANA was built from the ground up to run entirely in DRAM. The cost of DRAM chips falls every year, but flash memory remains far more cost-effective than DRAM. As a result, the future of in-memory databases appears to tilt towards flash memory. Recent research has shown that holding a 2 TB database in flash costs $73, as opposed to $1,209 for holding it in memory, a cost ratio of roughly 1:17.
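The 1:17 ratio follows directly from the two cost figures; a minimal Python sketch, using only the numbers stated above:

```python
# Verifying the flash-vs-DRAM cost ratio for a 2 TB database (figures from the article).
flash_cost = 73    # USD, data held in flash
dram_cost = 1209   # USD, data held in DRAM

ratio = dram_cost / flash_cost
print(f"Flash-to-DRAM cost ratio: 1:{ratio:.1f}")  # 1:16.6, i.e. roughly 1:17
```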

