• Ekyam Universal Reader

Ekyam’s Universal Reader’s primary function is to map data from all connected systems to Ekyam’s proprietary Data Standards. The Ekyam Data Standards are a crucial element of the platform: they represent a canonical, unified data model for all key retail entities and processes, and this standardized representation ensures that data is understood consistently within the Ekyam ecosystem. The Universal Reader is also responsible for ingesting raw data from various source systems (including parsing EDI/iDoc documents) and transforming it into the Ekyam Data Standards. How does the Universal Reader perform this transformation?
  • Mapping: The Universal Reader takes raw data (e.g., a JSON payload from Shopify, an XML file from an old ERP, or rows from a database) and transforms it into the Ekyam standard format.
  • Normalization: Data is cleaned, validated, and structured according to the Ekyam Data Standards. For instance, “SKU,” “ItemCode,” “Product ID,” and “UPC” from different systems might all be mapped to a single, standardized “ekyamProductID” field.
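The mapping and normalization steps above can be sketched in a few lines. The source field aliases (“SKU,” “ItemCode,” “Product ID,” “UPC”) and the target field `ekyamProductID` come from the text; the helper function itself is illustrative, not Ekyam’s actual API.

```python
# Different source systems name the product identifier differently;
# all known aliases collapse onto the single standardized field.
ID_ALIASES = {"SKU", "ItemCode", "Product ID", "UPC", "sku", "itemCode"}

def normalize(record: dict) -> dict:
    """Map any known identifier alias onto the standardized field,
    cleaning the value along the way."""
    standardized = {}
    for key, value in record.items():
        if key in ID_ALIASES:
            standardized["ekyamProductID"] = str(value).strip()
        else:
            standardized[key] = value
    return standardized

shopify_row = {"SKU": " TSHIRT-001 ", "title": "T-Shirt"}
erp_row = {"ItemCode": "TSHIRT-001", "price": 19.99}

print(normalize(shopify_row)["ekyamProductID"])  # TSHIRT-001
print(normalize(erp_row)["ekyamProductID"])      # TSHIRT-001
```

Both payloads end up with the same standardized key, which is what lets downstream components treat them uniformly.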
Use Case: NetSuite Reader Mapping

This use case illustrates how Ekyam’s Universal Reader facilitates the seamless ingestion and standardization of data from NetSuite, transforming it into Ekyam’s proprietary data standards for unified storage and processing.

The Ekyam Universal Reader Mapping Process:
1. End-point Configuration and Verification

The Ekyam platform is configured to target NetSuite’s data. Ekyam utilizes:
  • NetSuite’s /inventoryItem endpoint to pull comprehensive Product data.
  • NetSuite’s /inboundshipment endpoint to pull detailed Shipment data. image(40).png
  1. The Verify button checks whether the configured endpoints are valid or invalid.
  2. If the verification fails, the following message is displayed: image(41).png
2. Upon a successful verification, the Ekyam Universal Reader displays the Reader Mapping Screen.

image(42).png
3. Ekyam’s Standardized Collections (Left Panel) and Source System Fields (Right Panel):

→ Predefined Collections (Left Panel)

The vertical menu on the left showcases all predefined sub-collections or components under the primary Product entity. These include:
  1. Pricing
  2. Variants
  3. Media
  4. Inventory
  5. Attribute
  6. Marketing, etc.
Each of these sub-collections encapsulates a logical group of related fields, allowing users to navigate and configure mappings.

→ Mapping Panel (Right Panel)

The right-side section presents a field-by-field mapping interface:

Left Column (Source Fields): These are Ekyam’s standard fields, which form the canonical data schema within the Ekyam platform. These fields are consistent across all integrations, serving as a “Single Source of Truth”.

Right Column (Destination Fields): These dropdowns represent fields from the external/destination system (e.g., SAP B1, NetSuite) that are mapped to the corresponding Ekyam standard fields. The values are either selected manually or auto-filled based on the configuration or previous mappings. image(43).png
4. Field-Level Mapping (Products Example):

  • When the “Products” tab is selected from the left panel, the screen dynamically displays the specific field mappings for product data. Here, Ekyam’s standardized keys for the Products collection are clearly defined, such as:
    • Product ID
    • Product SKU
    • Pricing Tiers
    • And many others…
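A saved field-level mapping for the Products collection can be pictured as a simple lookup from Ekyam’s standardized keys to the source system’s keys. The Ekyam field names come from the list above; the NetSuite field names here are illustrative assumptions, not a definitive schema.

```python
# Hypothetical Products mapping: Ekyam standard field -> NetSuite field.
PRODUCT_MAPPING = {
    "Product ID": "internalId",
    "Product SKU": "itemId",
    "Pricing Tiers": "pricingMatrix",
}

def apply_mapping(mapping: dict, source_record: dict) -> dict:
    """Build a standardized record by pulling each mapped source field."""
    return {ekyam_field: source_record.get(src_field)
            for ekyam_field, src_field in mapping.items()}

netsuite_item = {"internalId": "1042", "itemId": "TSHIRT-001",
                 "pricingMatrix": [{"qty": 1, "price": 19.99}]}
print(apply_mapping(PRODUCT_MAPPING, netsuite_item))
```

Selecting a different collection in the left panel would simply swap in that collection’s mapping table.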
5. Data Ingestion and Standardization:

  • Once the synchronization process commences, the data pulled from NetSuite’s /inventoryItem endpoint (for Products) is immediately subjected to these AI-driven key mappings. The raw NetSuite product data is transformed into Ekyam’s standardized product format and subsequently saved to MongoDB. 
  • Similarly, data pulled from NetSuite’s /inboundshipment endpoint (for Shipments) undergoes the same intelligent mapping process.
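The ingestion flow described in these two bullets (pull from the endpoint, apply the key mappings, persist to MongoDB) can be sketched as below. The fetch and the MongoDB write are stubbed out, and the NetSuite field names are assumptions for illustration.

```python
def fetch_inventory_items() -> list[dict]:
    # Stand-in for a real call to NetSuite's /inventoryItem endpoint.
    return [{"itemId": "TSHIRT-001", "displayName": "T-Shirt",
             "basePrice": 19.99}]

def to_ekyam_standard(raw: dict) -> dict:
    # Apply the configured key mappings to produce a standardized document.
    return {
        "ekyamProductID": raw["itemId"],
        "name": raw["displayName"],
        "price": raw["basePrice"],
    }

docs = [to_ekyam_standard(r) for r in fetch_inventory_items()]
# In the real pipeline these standardized documents would be saved to
# MongoDB, e.g. db.products.insert_many(docs) via a driver such as pymongo.
print(docs[0]["ekyamProductID"])  # TSHIRT-001
```

The shipment flow from /inboundshipment follows the same shape with a different mapping table.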
AI-Powered Data Mapping

Defining how fields in a source system correspond to fields in a target/destination system, or how they map to a central standard, is traditionally a manual task. Ekyam significantly accelerates and simplifies this process through Artificial Intelligence (AI). This AI-assisted mapping is applied in the Universal Reader when mapping to the Ekyam Data Standards.
  • AI-Recommended Mappings: When connecting new systems or defining data flows, Ekyam’s AI engine analyzes the data schemas of the source and destination systems (or the Ekyam Data Standards). Based on field names, data types, and patterns, the AI recommends potential mappings.
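To make the idea of schema-based recommendations concrete: a simple engine could suggest mappings from field-name similarity alone. Ekyam’s actual AI is not documented here, so this sketch uses plain string similarity (`difflib`) purely to illustrate the recommendation step; all field names are invented.

```python
from difflib import SequenceMatcher

EKYAM_FIELDS = ["ekyamProductID", "productName", "unitPrice"]
SOURCE_FIELDS = ["product_id", "product_name", "unit_price", "warehouse"]

def recommend(ekyam_field: str, candidates: list[str]) -> str:
    """Return the source field whose name is most similar to the
    Ekyam standard field (ignoring case and underscores)."""
    return max(
        candidates,
        key=lambda c: SequenceMatcher(
            None, ekyam_field.lower(), c.replace("_", "").lower()
        ).ratio(),
    )

for field in EKYAM_FIELDS:
    print(field, "->", recommend(field, SOURCE_FIELDS))
```

A production engine would also weigh data types and value patterns, as the text notes, and would present these as suggestions for the user to confirm.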
Custom Field Mappings

While Ekyam’s AI-driven Universal Reader is capable of translating the majority of the data to its proprietary Data Standards, there are instances where specific fields from NetSuite may not match any of Ekyam’s pre-defined standard fields. In such scenarios, Ekyam provides the capability for Custom Mappings. These custom mappings are then saved in Ekyam’s database, ensuring that the Universal Reader consistently applies these specific rules during subsequent syncs. image (44).png
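One way to picture this: on each sync, saved custom rules are layered on top of the standard mapping. Persistence is shown with plain dicts here; in the platform these rules live in Ekyam’s database, and the custom NetSuite field name is hypothetical.

```python
# Standard mapping shipped with the integration.
STANDARD_MAPPING = {"ekyamProductID": "itemId", "name": "displayName"}

# A user-defined rule for a field with no pre-defined standard match
# (hypothetical NetSuite custom field).
CUSTOM_MAPPING = {"fabricBlend": "custitem_fabric_blend"}

def effective_mapping() -> dict:
    """Custom rules are merged over the standard mapping on every sync,
    so they are applied consistently."""
    merged = dict(STANDARD_MAPPING)
    merged.update(CUSTOM_MAPPING)
    return merged

print(sorted(effective_mapping()))
```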

• Ekyam Universal Writer

Just as the Universal Reader ingests and standardizes data, the Universal Writer is responsible for delivering data from Ekyam to the various connected destination systems in the specific format and structure they expect. Ekyam’s Universal Writer takes standardized data from within Ekyam and transforms/formats it for delivery to destination systems or trading partners, including generating EDI/iDOC documents. The Universal Writer can:
  • Transform Ekyam Standardized Data: Take data held in the Ekyam Data Standards (e.g., from the Universal Ledger or as a result of a workflow) and map it to the unique schema and format required by the receiving system (e.g., a specific XML structure for an ERP, a JSON payload for a marketing automation tool, or a CSV file for a reporting system).
  • Transmit External Data: In some scenarios, data might flow directly from one external connected system to another, with Ekyam orchestrating the transfer and ensuring the data is correctly formatted by the Universal Writer for the destination system.
This capability ensures that while Ekyam uses its internal standards for processing and as a source of truth, it communicates with each external system in its native language, ensuring seamless integration and interoperability.
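As a minimal sketch of the Writer direction: standardized Ekyam data is reshaped into a format a destination expects, here a CSV payload for a reporting system (one of the formats the text mentions). The record fields are illustrative assumptions.

```python
import csv
import io

def write_csv(standard_records: list[dict]) -> str:
    """Render standardized Ekyam records as a CSV payload for delivery
    to a destination reporting system."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["ekyamProductID", "name", "price"]
    )
    writer.writeheader()
    writer.writerows(standard_records)
    return buf.getvalue()

records = [{"ekyamProductID": "TSHIRT-001", "name": "T-Shirt",
            "price": 19.99}]
print(write_csv(records))
```

An XML or EDI writer would follow the same pattern with a different serializer, which is how the Writer speaks each destination’s native language from one internal standard.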