Getting Started with Flextract

This page will help you get started with Flextract. You'll be up and running in a jiffy!

Introduction

Effective data management depends on handling documents, extracting granular data, and organizing that data into coherent datasets. For developers building data processing pipelines, it is essential to understand how documents are uploaded, how data is extracted, and how it is finally exported into structured formats. This documentation breaks down each stage (Documents/Uploads, Extractions, and Exports) and explains its role and its connections to the others within a data management workflow.


🚧

For production keys, please email [email protected] or speak with your account manager (Enterprise only).

All keys and requests made via the API Reference are tied to the developer portal and are not meant for production use. Any processing done on the developer portal will be billed accordingly.
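As a minimal sketch of how a sandbox key is typically attached to API calls, the snippet below presets a bearer-token header on a session. The base URL, header scheme, and key format here are assumptions for illustration, not documented values; check the API Reference for the actual ones.

```python
import requests

# Hypothetical base URL for illustration only.
BASE_URL = "https://api.flextract.example.com/v1"

def make_session(api_key: str) -> requests.Session:
    """Return a session with an (assumed) bearer-token auth header preset,
    so every request made through it carries the key automatically."""
    session = requests.Session()
    session.headers.update({"Authorization": f"Bearer {api_key}"})
    return session
```

A session like this keeps authentication in one place instead of repeating the header on every call.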


Primary Data Model

Documents/Uploads (Raw Original Documents)

Documents/Uploads refer to the initial step of the data pipeline where raw data is introduced into the system. Typically, these uploads are in PDF format, containing a variety of information such as text, images, and tables. The PDF documents can originate from multiple sources, including scanned forms, digital reports, or generated documents.
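To make the upload stage concrete, the sketch below builds a multipart PDF upload request without sending it. The `/documents` endpoint path, the `file` form field, and the header scheme are assumptions chosen for illustration; they are not taken from the Flextract API.

```python
import requests

# Hypothetical base URL and endpoint; see the API Reference for real values.
BASE_URL = "https://api.flextract.example.com/v1"

def build_upload_request(pdf_bytes: bytes, api_key: str) -> requests.PreparedRequest:
    """Build (but do not send) a multipart/form-data request that
    uploads one raw PDF document to the assumed /documents endpoint."""
    request = requests.Request(
        method="POST",
        url=f"{BASE_URL}/documents",  # assumed endpoint path
        headers={"Authorization": f"Bearer {api_key}"},
        files={"file": ("document.pdf", pdf_bytes, "application/pdf")},
    )
    return request.prepare()
```

Separating request construction from sending makes the upload step easy to inspect and test before any data leaves your machine.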

Extractions (Granular Data)

Extractions involve the process of converting the unstructured data from the uploaded documents into a granular, usable format. This step is crucial for isolating specific pieces of data, such as individual fields, text segments, or tabular data, from the comprehensive content of the PDFs. This granular data serves as the building blocks for the final structured datasets.
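A sketch of what working with granular data can look like: the function below flattens one extraction payload into individual field records. The payload shape assumed here (a `document_id` plus a `fields` list with `name`, `value`, and `page` keys) is invented for illustration and is not the documented response schema.

```python
from typing import Any

def flatten_extraction(extraction: dict[str, Any]) -> list[dict[str, Any]]:
    """Flatten one extraction payload into granular field records,
    tagging each record with its source document so records from
    many documents can be merged later."""
    document_id = extraction["document_id"]
    records = []
    for field in extraction.get("fields", []):
        records.append({
            "document_id": document_id,
            "name": field["name"],
            "value": field["value"],
            "page": field.get("page"),  # page may be absent for some fields
        })
    return records
```

Each record is self-describing, which is what makes the granular data usable as a building block for the export stage.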

Exports (Merged and Organized Data)

Exports represent the final stage in the data processing pipeline, where the granular data extracted from the documents is merged and organized into structured, coherent datasets. At this stage, the isolated pieces of data are combined according to predefined schemas or business rules, ensuring consistency and accuracy. The resulting datasets are ready for further analysis, reporting, or distribution. Typically, the data is exported in formats such as CSV, JSON, or databases, making it accessible for downstream applications and stakeholders.
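As a sketch of the merge-and-organize step, the function below combines granular field records into one row per document and serializes the result as CSV. The record shape (dicts with `document_id`, `name`, and `value` keys) is an assumption used for illustration, as is the one-row-per-document rule standing in for a real schema or business rules.

```python
import csv
import io

def export_csv(records: list[dict]) -> str:
    """Merge granular field records into one row per source document,
    then serialize the rows as CSV with one column per field name."""
    rows: dict[str, dict] = {}
    for rec in records:
        row = rows.setdefault(rec["document_id"], {"document_id": rec["document_id"]})
        row[rec["name"]] = rec["value"]
    # Column order: document_id first, then field names alphabetically.
    fieldnames = ["document_id"] + sorted(
        {name for row in rows.values() for name in row if name != "document_id"}
    )
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=fieldnames)
    writer.writeheader()
    for row in rows.values():
        writer.writerow(row)
    return buffer.getvalue()
```

The same merged rows could just as easily be serialized to JSON or written to a database, which is why the merge logic is kept separate from the output format.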

Summary

The Documents/Uploads, Extractions, and Exports stages form a single pipeline for transforming raw documents into valuable, structured information. For developers, understanding each stage is key to building efficient and reliable data processing systems: upload the raw documents, break them down into granular data through precise extractions, and merge that data into organized exports, so the result is both accurate and ready for use in downstream applications.


What’s Next

Get started with authentication!