aqfer data lakes provide your business with the enterprise-level capacity needed to host the high-volume, log-level, detailed data sets that Ad Tech vendors use to actually build reports, while other CDPs simply gather those finished reports from interfaces and APIs.
Most companies self-identifying as CDPs don’t provide detailed data. They collect data that has already been pre-processed by vendor reporting systems, then either store it or simply move it from one partner to another.
By contrast, aqfer data lakes account for and maintain the complex composition of events.
Numerous low-level data mobilization services position themselves as CDPs:
Most companies self-identifying as CDPs focus on simply moving pre-processed vendor data from place to place using scheduled automation.
If a user enters a tunnel while watching a video, or hops between cell towers, the actions associated with that video playback can be delivered from the user’s device to your CDP separately. This can lead to counting multiple events or users where in fact there is only one.
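To make that failure mode concrete, here is a minimal sketch, not aqfer’s actual API, of collating sub-events that share a session identifier back into one logical event (all field and function names here are hypothetical):

```python
from collections import defaultdict

def collate_events(raw_events):
    """Group sub-events by session id so that fragments of one playback
    delivered separately are counted as a single logical event."""
    sessions = defaultdict(list)
    for ev in raw_events:
        sessions[ev["session_id"]].append(ev)
    # One logical event per session, with sub-events ordered by timestamp.
    return {
        sid: sorted(parts, key=lambda e: e["ts"])
        for sid, parts in sessions.items()
    }

# Two fragments of the same playback arrive as separate deliveries:
raw = [
    {"session_id": "s1", "ts": 10, "action": "play"},
    {"session_id": "s1", "ts": 55, "action": "complete"},  # after the tunnel
    {"session_id": "s2", "ts": 20, "action": "play"},
]
events = collate_events(raw)
```

A system that counts raw deliveries would report three events here; collating by session id correctly reports two.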
The impact of time-sync disruptions at the atomic, user level becomes even more pronounced when events reach your CDP out of order because of a large-scale disruption or delay in data transfer from a vendor.
aqfer’s enterprise-level data processing infrastructure is designed with advanced data reconciliation protocols that facilitate the complex reintegration of data elements that can be lost or applied out of sync in other CDP systems. Incomplete events and late-arriving sub-events are handled in distinct collation, quarantine, and reprocessing stages. Most other systems claiming to be CDPs do not provide these design elements.
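The collation, quarantine, and reprocessing stages described above can be sketched roughly as follows; this is an illustrative model under assumed semantics (class, method, and field names are hypothetical, not aqfer’s implementation):

```python
class EventPipeline:
    """Sketch of distinct collation, quarantine, and reprocessing stages
    for incomplete and late-arriving sub-events."""

    REQUIRED = {"start", "end"}  # parts an event needs to be complete

    def __init__(self):
        self.complete = {}    # event_id -> fully collated parts
        self.quarantine = {}  # event_id -> parts awaiting missing pieces

    def ingest(self, event_id, part):
        # Pull any previously seen parts for this event, wherever they sit.
        parts = self.quarantine.pop(event_id, self.complete.pop(event_id, {}))
        parts[part["kind"]] = part
        if self.REQUIRED <= parts.keys():
            self.complete[event_id] = parts      # fully collated
        else:
            self.quarantine[event_id] = parts    # held for reprocessing

pipe = EventPipeline()
pipe.ingest("e1", {"kind": "start", "ts": 1})
# The "end" sub-event arrives late; until then, e1 sits in quarantine
# rather than being reported as a (mis-counted) complete event.
pipe.ingest("e1", {"kind": "end", "ts": 9})
```

The key design point is that an incomplete event is never reported: it waits in quarantine until its late-arriving pieces trigger reprocessing.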
The aqfer Data Integration service handles the import of data from partner and vendor data sources. It can operate on a periodic batch or continuous streaming basis and handles curation, verification, and translation of the data to your target data lake schema.
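The verify-then-translate flow can be sketched as below; this is a minimal illustration, not the Data Integration service’s actual interface (field mappings and function names are assumptions):

```python
def translate(record, mapping):
    """Rename vendor fields to the target data-lake schema."""
    return {target: record[source] for source, target in mapping.items()}

def ingest(records, mapping, required):
    """Verify then translate each vendor record. `records` can be a finite
    batch or an endless streaming iterator; the loop is identical."""
    for rec in records:
        if any(rec.get(field) is None for field in required):
            continue  # verification failed: drop (or quarantine) the record
        yield translate(rec, mapping)

# Hypothetical vendor feed: "uid"/"evt" map onto the lake's schema.
mapping = {"uid": "user_id", "evt": "event_type"}
batch = [{"uid": "u1", "evt": "click"}, {"uid": None, "evt": "view"}]
clean = list(ingest(batch, mapping, required=("uid",)))
```

Because `ingest` is a generator, the same code path serves both batch files and streaming sources.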
The aqfer Collation service is the heart of the aqfer Data Lake. Our highly optimized Collation service takes recently processed results from the Data Integration service and continuously reorganizes them into user-centric physical data structures that make audience analytics fast and efficient.
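Conceptually, this reorganization pivots a time-ordered event log into per-user histories. A minimal sketch of the idea, with hypothetical field names:

```python
from collections import defaultdict

def reorganize_by_user(event_log):
    """Pivot a time-ordered event log into user-centric structures so an
    audience query can scan one user's full history contiguously."""
    per_user = defaultdict(list)
    for ev in event_log:
        per_user[ev["user_id"]].append((ev["ts"], ev["event_type"]))
    for history in per_user.values():
        history.sort()  # each user's events in time order
    return dict(per_user)

log = [
    {"user_id": "u2", "ts": 5, "event_type": "view"},
    {"user_id": "u1", "ts": 3, "event_type": "click"},
    {"user_id": "u2", "ts": 1, "event_type": "click"},
]
by_user = reorganize_by_user(log)
```

Physically co-locating each user’s events is what makes per-user and per-audience queries cheap compared with rescanning the raw log.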
The Data Distribution service manages distribution of audience data and syndicated event data to activation partners and subscribers. Data can be scheduled for delivery based on selection criteria, or pushed or pulled on demand in bulk.
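Selection-criteria-based delivery can be sketched like this (a toy stand-in; the criteria format, sink, and function names are illustrative assumptions, not the service’s real API):

```python
def select_audience(users, criteria):
    """Return the subset of users matching all selection criteria."""
    return [u for u in users if all(u.get(k) == v for k, v in criteria.items())]

def deliver(users, criteria, sink):
    """Push the selected audience to an activation partner's sink.
    The same call can run on a schedule or on demand."""
    payload = select_audience(users, criteria)
    sink.extend(payload)
    return len(payload)

users = [
    {"user_id": "u1", "segment": "gamers"},
    {"user_id": "u2", "segment": "cooks"},
]
sink = []  # stands in for a partner endpoint or bulk export file
sent = deliver(users, {"segment": "gamers"}, sink)
```

The same `deliver` call works whether it is invoked by a scheduler or triggered on demand; only the caller differs.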
The Analytics service allows interactive query and reporting against an optimized data mart. Key data structures include semi-summarized user events and audience segments in columnar format. Visualization tools such as Amazon Quicksight and Tableau are available and can be easily integrated.
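To illustrate the kind of query the semi-summarized structures make cheap, here is a sketch using an in-memory SQLite table as a stand-in for the data mart (the schema is hypothetical; aqfer’s actual mart is columnar, not SQLite):

```python
import sqlite3

# In-memory SQLite table standing in for the columnar data mart.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user_events (user_id TEXT, event_type TEXT, n INT)")
con.executemany(
    "INSERT INTO user_events VALUES (?, ?, ?)",
    [("u1", "click", 3), ("u1", "view", 7), ("u2", "click", 2)],
)
# Because events are pre-summarized per user, an aggregate over the whole
# audience is a single scan over small rows:
rows = con.execute(
    "SELECT event_type, SUM(n) FROM user_events "
    "GROUP BY event_type ORDER BY event_type"
).fetchall()
```

Visualization tools then simply point at queries like this one.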
aqfer’s support for H2O Sparkling Water and SparkR provides a wide range of supervised and unsupervised methods for training models.
Once your models are developed, they can be used to score users and infer predictions, such as classifying users into segments. The results are then channelled back into the data lake’s audience tables and made available for activation and distribution through the Data Distribution service.
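The score-then-write-back loop can be sketched in plain Python; this toy linear scorer merely stands in for a model trained in Sparkling Water or SparkR, and every name, weight, and threshold here is an assumption:

```python
def score_user(features, weights, bias=0.0):
    """Linear score: a stand-in for a trained model's prediction."""
    return sum(weights[k] * v for k, v in features.items() if k in weights) + bias

def classify_into_segments(users, weights, threshold=0.5):
    """Score each user and channel the resulting segment membership back
    into an audience table (here, a plain dict)."""
    audience_table = {}
    for uid, features in users.items():
        label = "high_intent" if score_user(features, weights) >= threshold else "low_intent"
        audience_table[uid] = label
    return audience_table

users = {
    "u1": {"clicks": 0.9, "views": 0.2},
    "u2": {"clicks": 0.1, "views": 0.1},
}
weights = {"clicks": 0.8, "views": 0.3}
segments = classify_into_segments(users, weights)
```

Once segment labels land in the audience table, the Data Distribution service can deliver them to activation partners like any other audience data.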