
3 posts tagged with "object mapping"


· 14 min read
Neel Phadnis

(Source: Photo by Jametlene Reskp on [Unsplash](https://unsplash.com/))

Aerospike Database and its client API provide a rich set of capabilities that have evolved over more than a decade across a growing number of mission-critical deployments. This post gives a high-level view of the Aerospike architecture and API so that developers gain a broader understanding of its capabilities and become more productive and effective. It also points to resources for further exploration of specific areas.

· 20 min read
Neel Phadnis

(Source: Photo by Pietro Jeng on [Unsplash](https://unsplash.com/))

This post focuses on the use of Collection Data Types (CDTs) for data modeling in Aerospike with a large number of objects. It is Part 2 of a two-part series on data modeling; you can find the first post here.

Context

Data Modeling is the exercise of mapping application objects onto the model and mechanisms provided by the database for persistence, performance, consistency, and ease of access.

Aerospike Database is purpose-built for applications that require predictable sub-millisecond access to billions to trillions of objects and need to store terabytes to petabytes of data, while keeping the cluster size - and therefore the operational costs - small. The combination of large data size and small cluster size means that the capacity of high-speed storage on each node must be high.
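To make the CDT approach the Part 2 post explores a little more concrete, here is a minimal sketch using the Aerospike Java client's map operations. It is an illustration only, not code from the post: the host, namespace (`test`), set (`demo`), record key, and bin name (`events`) are assumptions chosen for the example.

```java
import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Key;
import com.aerospike.client.Record;
import com.aerospike.client.Value;
import com.aerospike.client.cdt.MapOperation;
import com.aerospike.client.cdt.MapPolicy;
import com.aerospike.client.cdt.MapReturnType;

public class CdtSketch {
    public static void main(String[] args) {
        // Assumed local server; namespace "test" and set "demo" are illustrative.
        try (AerospikeClient client = new AerospikeClient("localhost", 3000)) {
            Key key = new Key("test", "demo", "user-1");

            // Keep many small objects inside one record using a map (CDT) bin,
            // adding entries server-side instead of rewriting the whole collection.
            client.operate(null, key,
                MapOperation.put(MapPolicy.Default, "events",
                    Value.get("2023-01-15T10:00"), Value.get("login")),
                MapOperation.put(MapPolicy.Default, "events",
                    Value.get("2023-01-15T10:05"), Value.get("view-item")));

            // Read back only a key range of the map, again evaluated on the server.
            Record rec = client.operate(null, key,
                MapOperation.getByKeyRange("events",
                    Value.get("2023-01-15T10:00"), Value.get("2023-01-15T11:00"),
                    MapReturnType.KEY_VALUE));
            System.out.println(rec.getValue("events"));
        }
    }
}
```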

· 13 min read
Neel Phadnis

(Source: Photo by NASA on [Unsplash](https://unsplash.com/))

Introduction

Data Modeling is the exercise of mapping application objects onto the model and mechanisms provided by the database for persistence, performance, consistency, and ease of access.

Aerospike Database is purpose-built for applications that require predictable sub-millisecond access to billions to trillions of objects and need to store terabytes to petabytes of data, while keeping the cluster size - and therefore the operational costs - small. The combination of large data size and small cluster size means that the capacity of high-speed storage on each node must be high.