Trillium Quality

Standardisation and deduplication of data rarely get the spotlight and are easily overlooked. Yet data quality remains underestimated and is still one of the biggest issues in many data projects.

The ability to create and apply a data quality process configured to your specific needs is crucial in single customer view, master data management, and data migration projects.

  • Access, prepare, cleanse, and standardise data.

  • Apply pre-built or custom-built enterprise data cleansing, validation, linking, and enrichment templates.

  • Add missing postal information, latitude/longitude coordinates, and other reference data elements.

  • Match and deduplicate records and uncover relationships within households, businesses, and accounts (see the sketch after this list).

  • Assemble or consolidate records into a single “best” record with automated, intelligent selection of the best elements.

  • Apply the same rules in batch processing of large data sources as well as online, integrated into applications such as Microsoft CRM, SAP, and Salesforce.
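Matching, deduplication, and “best record” consolidation are easiest to grasp from a small example. The following Python sketch shows the general technique only, under simple assumptions (fuzzy name comparison with a fixed threshold and a “longest non-empty value wins” survivorship rule); it is not Trillium's engine, and all record fields, function names, and thresholds here are hypothetical.

    from difflib import SequenceMatcher

    # Toy input: three customer records, two of which describe the same company.
    records = [
        {"id": 1, "name": "Acme Corp.", "city": "Berlin", "phone": ""},
        {"id": 2, "name": "ACME Corp",  "city": "",       "phone": "+49 30 123456"},
        {"id": 3, "name": "Beta GmbH",  "city": "Munich", "phone": "+49 89 654321"},
    ]

    def similar(a, b, threshold=0.85):
        # Fuzzy comparison on normalised values (lower-cased, alphanumerics only).
        norm = lambda s: "".join(ch for ch in s.lower() if ch.isalnum())
        return SequenceMatcher(None, norm(a), norm(b)).ratio() >= threshold

    def cluster(records):
        # Naive O(n^2) linkage: a record joins the first group whose
        # representative name it matches, otherwise it starts a new group.
        groups = []
        for rec in records:
            for group in groups:
                if similar(rec["name"], group[0]["name"]):
                    group.append(rec)
                    break
            else:
                groups.append([rec])
        return groups

    def consolidate(group):
        # Survivorship rule: per field, keep the longest non-empty value.
        return {
            field: max((r[field] for r in group if r[field]),
                       key=lambda v: len(str(v)), default="")
            for field in group[0]
        }

    for group in cluster(records):
        print(consolidate(group))

A production engine would add blocking or indexing to avoid the quadratic comparison and would use weighted, field-level survivorship rules, but the overall structure is the same: standardise, compare, cluster, consolidate.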

Features

  • Jump-start wizards and templates.

  • Comprehensive library of functions for reuse of rules, processes, and workflows.

  • Methodology-driven GUI to accelerate data quality development.

  • Dynamic processing of complex, mixed-country data sets.

  • Multidomain support for customer/party, product, assets, and financial data.

  • Enrichment capabilities for demographics, firmographics, and geocodes.

  • Support for all major languages and multiple data encodings, including ASCII, EBCDIC, and Unicode.

  • GUI available in local languages.

  • API support for all major standards: C, C++, C#, Java, SOAP, .NET, XML.

  • Integrated suite of data discovery, profiling, parsing, standardisation, matching, and monitoring functionality (a small profiling sketch follows this list).

  • Interactive summary views, charts and graphs.
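As an illustration of what the discovery and profiling step produces, here is a minimal Python sketch that computes fill rate, distinct counts, and pattern signatures per column. It is not Trillium's profiler; the data, function names, and the pattern convention (A for letters, 9 for digits) are assumptions made for the example.

    from collections import Counter

    rows = [
        {"postcode": "10115",    "country": "DE"},
        {"postcode": "SW1A 1AA", "country": "GB"},
        {"postcode": "",         "country": "DE"},
    ]

    def pattern(value):
        # Map characters to classes: A = letter, 9 = digit, others kept as-is.
        return "".join("A" if c.isalpha() else "9" if c.isdigit() else c
                       for c in value)

    def profile(rows, column):
        values = [r.get(column, "") for r in rows]
        filled = [v for v in values if v]
        return {
            "column": column,
            "rows": len(values),
            "fill_rate": len(filled) / len(values) if values else 0.0,
            "distinct": len(set(filled)),
            "patterns": Counter(pattern(v) for v in filled),
        }

    for col in ("postcode", "country"):
        print(profile(rows, col))

Pattern signatures like these are what make mixed-country data visible at a glance: a single postcode column containing both 99999 and AA9A 9AA signatures immediately flags German and UK formats side by side.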