Design Your App's Data Model Structure

Designing Your Data Model In Mindbricks

Data modeling is the foundation of every successful application in Mindbricks. At its core, a data model represents your business domain as structured, semantic entities that flow through your entire architecture. In Mindbricks, data models aren't just database schemas—they're intelligent patterns that define how information is stored, validated, accessed, and transformed across your microservices.

Pattern shape — explicit-mode is the default for all new projects (Q2 2026 onwards). A data property carries DB-shape concerns (type, isRequired, isArray, enumOptions, defaultValue, relationSettings, indexSettings) plus two framework auto-binding hooks (sessionSettings, staticJoin). Source-of-value for the API is decided per-API on the BusinessApi (via requestParameters + dataClauseItems), not at the property level. The slim form below is the documented shape.

Explicit-mode property shape (new projects)

{
  "basicSettings": {
    "name": "title",
    "type": "String",
    "isArray": false,
    "isRequired": true,
    "isFilterParameter": false,
    "enumOptions": null,
    "defaultValue": null
  },
  "relationSettings": null,
  "sessionSettings": null,
  "staticJoin": null,
  "indexSettings": { "indexedInDb": true, "indexedInElastic": true, "unique": false }
}

Activation by block presence. A property's behavioral hooks activate by block presence: relationSettings populated means the property is a foreign key; sessionSettings populated means the value is session-bound; staticJoin populated means the value is resolved from another data object at write time. Absent or null means the hook is off. sessionSettings and staticJoin are mutually exclusive on a single property; the auditor enforces this.

isRequired is DB NOT-NULL only. Whether the API caller has to supply the field is decided per-API via requestParameters[].isRequired on each BusinessApi. Property-level isRequired is purely the database column constraint.

defaultValue (DB-level scalar default). The database fills this when no write supplies a value. Most useful for NOT NULL columns where you want a sensible system default without forcing every write path to think about it. Only scalar literals here — computed defaults like now() or LIB.<fn>(...) belong in per-API dataClauseItems MScript on the BusinessApi.
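
As a minimal sketch of this (the property name and default value are illustrative, following the explicit-mode shape above), a NOT NULL column with a scalar DB-level default might look like:

```json
{
  "basicSettings": {
    "name": "status",
    "type": "String",
    "isArray": false,
    "isRequired": true,
    "isFilterParameter": true,
    "enumOptions": null,
    "defaultValue": "active"
  },
  "relationSettings": null,
  "sessionSettings": null,
  "staticJoin": null,
  "indexSettings": { "indexedInDb": true, "indexedInElastic": true, "unique": false }
}
```

Because defaultValue is filled by the database, any write path that omits status still produces a valid NOT NULL row; a computed default would instead go in per-API dataClauseItems MScript.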

sessionSettings (auto-bind to session). When set, the framework auto-injects the value from session[sessionParam] at write time. Properties with sessionSettings are NOT declared in requestParameters (callers can't supply session values) and NOT in dataClauseItems (the framework writes them). sessionSettings.isOwnerField: true marks the data object's ownership field — used by ownership-based access-control checks.
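
For illustration (the sessionParam value "userId" is an assumption and depends on your session schema), an ownership field auto-bound to the session might be sketched as:

```json
{
  "basicSettings": {
    "name": "ownerId",
    "type": "ID",
    "isArray": false,
    "isRequired": true,
    "isFilterParameter": true,
    "enumOptions": null,
    "defaultValue": null
  },
  "relationSettings": null,
  "sessionSettings": { "sessionParam": "userId", "isOwnerField": true },
  "staticJoin": null,
  "indexSettings": { "indexedInDb": true, "indexedInElastic": true, "unique": false }
}
```

Since sessionSettings is populated, this property would not appear in any BusinessApi's requestParameters or dataClauseItems; the framework injects session.userId at write time.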

staticJoin (auto-bind via cross-object lookup). When set, the framework runs a join with another data object at write time and stores the joined value on this record. Optional staticJoin.contextParameterName exposes the joined value as this.<contextParameterName> so per-API dataClauseItems can compose with it. As with sessionSettings, these properties don't appear in requestParameters or dataClauseItems — the framework handles them.
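
As a hypothetical sketch only: of the staticJoin keys below, only contextParameterName is documented in this guide; the join-definition keys (joinObject, joinKey, valueProperty) are placeholders for whatever your pattern reference specifies. A denormalized category name stored on a product might look roughly like:

```json
{
  "basicSettings": {
    "name": "categoryName",
    "type": "String",
    "isArray": false,
    "isRequired": false,
    "isFilterParameter": true,
    "enumOptions": null,
    "defaultValue": null
  },
  "relationSettings": null,
  "sessionSettings": null,
  "staticJoin": {
    // joinObject, joinKey, and valueProperty are assumed names, not documented keys
    "joinObject": "category",
    "joinKey": "categoryId",
    "valueProperty": "name",
    "contextParameterName": "categoryName"
  },
  "indexSettings": { "indexedInDb": false, "indexedInElastic": true, "unique": false }
}
```

With contextParameterName set, per-API dataClauseItems could compose with the joined value as this.categoryName.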

For the per-API decision tree (which properties to declare in requestParameters vs dataClauseItems vs skip), see the Building Your API guide's explicit-mode quick reference.


What is a Data Model in Mindbricks

In the Mindbricks ecosystem, a data model is defined through DataObject patterns within your service definition. Each DataObject represents a distinct entity in your business domain—whether that's a User, Product, Order, or any other concept central to your application. These objects are defined semantically in your service JSON, following the Mindbricks Pattern Ontology (MPO). Human architects can create and modify these data models through the visual design interface, while AI agents can work directly with the JSON representation—both approaches produce the same structured outcome.

Unlike traditional database schemas, Mindbricks data models carry rich semantic meaning. They don't just define fields and types; they encapsulate business rules, relationships, validation logic, and access patterns. This semantic richness allows both human architects and AI agents to understand the intent and purpose behind each data entity.

The role of MPO in data modeling

The Mindbricks Pattern Ontology (MPO) provides a structured framework for defining data models that are both human-readable and machine-processable. Within the MPO, DataObject patterns follow specific conventions that ensure consistency, maintainability, and scalability.

While the underlying structure is represented as JSON, human architects interact with these models through an intuitive visual design interface. This UI representation translates complex JSON structures into forms, diagrams, and interactive editors—making data modeling accessible without requiring deep JSON expertise. Any JSON path referenced in documentation also corresponds to a specific menu path or form field in the UI, allowing seamless navigation between documentation and the design interface.

When you define a data model using MPO patterns, you're creating more than just a database table—you're establishing:

  • A semantic blueprint for your business entities

  • A validation framework that ensures data integrity

  • A foundation for API routes and access controls

  • A source of truth for cross-service communication

  • A component that Genesis can compile into production-ready code

The MPO approach eliminates ambiguity in your data definitions. Each property, relationship, and validation rule is explicitly defined in a structured format that both humans and AI can understand and manipulate. This structure helps prevent errors that might occur with manual JSON editing, as the UI guides users through valid options and configurations while providing immediate validation feedback.

How data models form the foundation for services and APIs

In Mindbricks, data models serve as the cornerstone upon which your entire service architecture is built:

  1. Service Structure: Each microservice typically centers around one or more related DataObject patterns that define its domain responsibility.

  2. API Generation: Your data models directly inform the shape of your API. The properties you define and the relationships you establish become the backbone of your service's interface.

  3. Validation Layer: The constraints and rules you define at the data model level automatically translate into validation logic throughout your application.

  4. Business Logic: Custom behaviors, computed properties, and data transformations start at the model level before extending into routes and controllers.

  5. Cross-Service Communication: Data models define the contracts for how information flows between services, ensuring consistent data handling across your architecture.

By investing time in thoughtful data modeling, you create a solid foundation that simplifies downstream development. Well-designed data models lead to intuitive APIs, consistent validation, clear business logic, and scalable services. They enable both human developers and AI agents to collaborate effectively by providing a shared understanding of your application's domain.

In the following sections, we'll explore how to create, structure, and optimize your data models within the Mindbricks framework—starting with the basic concepts and progressing to advanced modeling techniques that leverage the full power of the MPO.

Core Concepts

The foundation of data modeling in Mindbricks centers around a set of key concepts that define how data is structured, validated, and related. Understanding these core elements will provide you with the necessary framework to build robust data models for your applications.

Understanding DataObjects

A DataObject in Mindbricks is defined according to the MPO as an object with an objectSettings block and a list of properties. Here is a minimal MPO-compliant example:

{
  "objectSettings": {
    "basicSettings": {
      "name": "product",
      "description": "Represents a product in the catalog.",
      "useSoftDelete": true
    },
    "authorization": {
      "objectDataIsPublic": false,
      "objectDataIsInTenantLevel": false
    }
  },
  "properties": [
    // DataProperty objects go here
  ]
}

In the UI, you create a new DataObject by navigating to your service definition and selecting "Add Data Object". Each DataObject is defined by:

  • A unique name (lower camelCase, e.g., product)

  • A description

  • A set of properties (see below)

  • Optional settings for authorization, caching, and more

Properties and Types

Each property in a DataObject is a DataProperty object built from basicSettings plus optional relationSettings, sessionSettings, staticJoin, and indexSettings blocks. Here is an MPO-compliant property example:

{
  "basicSettings": {
    "name": "price",
    "type": "Double",
    "isArray": false,
    "definition": "The retail price of the product.",
    "isRequired": true,
    "allowUpdate": true,
    "isFilterParameter": false,
    "enumOptions": null,
    "defaultValue": 0.0
  },
  "relationSettings": null,
  "sessionSettings": null,
  "staticJoin": null,
  "indexSettings": {
    "indexedInDb": true,
    "indexedInElastic": true,
    "unique": false
  }
}

The type field must use a value from the MPO DataTypes enum, such as String, Text, Integer, Boolean, Double, Date, Enum, etc. defaultValue is a DB-level scalar default — used by the database when no write supplies a value, most commonly for NOT NULL columns. Computed defaults (like now() or library calls) belong in per-API dataClauseItems MScript, not here.
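
For example, assuming enumOptions takes an array of string literals (the option values here are illustrative), an Enum property could be sketched as:

```json
{
  "basicSettings": {
    "name": "status",
    "type": "Enum",
    "isArray": false,
    "definition": "Lifecycle state of the product.",
    "isRequired": true,
    "allowUpdate": true,
    "isFilterParameter": true,
    "enumOptions": ["draft", "active", "archived"],
    "defaultValue": "draft"
  },
  "relationSettings": null,
  "sessionSettings": null,
  "staticJoin": null,
  "indexSettings": { "indexedInDb": true, "indexedInElastic": true, "unique": false }
}
```

Note that the defaultValue ("draft") is a scalar literal drawn from enumOptions, consistent with the DB-level default rule described earlier.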

Relationships between Data Objects

A relationship is declared by populating the relationSettings block. There is no boolean flag — block presence is the activation. For example, a foreign key to a category object (note that "onDeleteAction": "setNull" only makes sense on a nullable column, so isRequired and relationIsRequired are false here):

{
  "basicSettings": {
    "name": "categoryId",
    "type": "ID",
    "isArray": false,
    "definition": "Reference to the product category.",
    "isRequired": false,
    "allowUpdate": true
  },
  "relationSettings": {
    "relationName": "category",
    "relationTargetObject": { "name": "category" },
    "relationTargetKey": "id",
    "relationTargetIsParent": true,
    "onDeleteAction": "setNull",
    "relationIsRequired": false
  }
}

  • One-to-One: A property with relationSettings populated and isArray: false; set "unique": true in indexSettings to enforce that each target row is referenced at most once.

  • One-to-Many: The related object holds a property referencing the parent (e.g., many products reference one category).

  • Many-to-Many: Use a join object (e.g., productTag with productId and tagId).

  • Self-Reference: The relationTargetObject can be the same as the current object.
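
As a sketch of the many-to-many case above (object and property names are illustrative, and the "cascade" value for onDeleteAction is an assumption — only "setNull" appears in this guide), a productTag join object pairs two foreign-key properties:

```json
{
  "objectSettings": {
    "basicSettings": {
      "name": "productTag",
      "description": "Join object linking products to tags.",
      "useSoftDelete": false
    }
  },
  "properties": [
    {
      "basicSettings": { "name": "productId", "type": "ID", "isArray": false, "isRequired": true },
      "relationSettings": {
        "relationName": "product",
        "relationTargetObject": { "name": "product" },
        "relationTargetKey": "id",
        "relationTargetIsParent": true,
        "onDeleteAction": "cascade",
        "relationIsRequired": true
      }
    },
    {
      "basicSettings": { "name": "tagId", "type": "ID", "isArray": false, "isRequired": true },
      "relationSettings": {
        "relationName": "tag",
        "relationTargetObject": { "name": "tag" },
        "relationTargetKey": "id",
        "relationTargetIsParent": true,
        "onDeleteAction": "cascade",
        "relationIsRequired": true
      }
    }
  ]
}
```

Each product–tag pairing becomes one row of this join object, so deleting either side can clean up the link rows.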

In the UI, these relationships are visualized and can be created by linking objects together.

Data Validation Patterns

Property-level validation in Mindbricks focuses on ensuring that each property is present (if required) and matches the specified data type. For example, you can enforce that a property is required and matches a specific type, such as String:

{
  "basicSettings": {
    "name": "email",
    "type": "String",
    "isArray": false,
    "definition": "User's email address.",
    "isRequired": true
  }
}

At the property level, validation is limited to:

  • Nullability: Whether the property is required (isRequired: true)

  • Type Control: The value must match the specified type from the MPO DataTypes enum

For more complex validation—such as cross-field checks, business rules, or conditional logic—Mindbricks recommends implementing these in the route logic (e.g., using route validations or hooks). These advanced validation patterns will be covered in detail in the next document, "Building Your API with CRUD Routes."

The Data Object Lifecycle

DataObjects in Mindbricks follow a lifecycle managed by the platform:

  1. Creation: Via a create API

  2. Validation: Enforced by property and object settings

  3. Persistence: Managed by the service's data model

  4. Retrieval: Via get API or list API

  5. Update: Via update API

  6. Deletion: Via delete API (soft or hard delete)


Understanding these core concepts provides the foundation for creating effective data models in Mindbricks. In the next section, we'll explore how to create your first data model using these principles.