Manage data import and processing runs

    Overview

    The Data Pipeline page, accessible under the Data section in the main navigation, provides a high-level overview of the status of your data import and accounting (data processing) runs, organized in the sequence in which they occur.

    From this page, you can:

    • Monitor the status of your data import and processing runs

    • Manually initiate data import and processing runs for both current and historical data

    • View details for your latest 200 imported raw data objects

    This document describes how to use the Data Pipeline to monitor your data pipeline and manually import and process data.

    Understanding your data import and processing runs

    Leapfin processes data in two phases:

    Data processing phase

    Description

    When does this occur?

    Data import

    During a data import run, raw data is ingested from your integrated billing and/or payment systems. Note: Data ingestion via API or cloud storage operates independently and does not require an import run. Once data is pushed into Leapfin via API or cloud storage, you can immediately process it by running an Accounting job.

    See the Data input section of the data processing overview.

    • Leapfin processes data automatically every 24 hours, based on the schedule established during your initial implementation. You can work with Leapfin to customize the schedule of automated runs.

    • You can also click Run Job or Run Backfill to manually initiate data import and accounting runs. See the Manually import and process your data section below for more details.

    Data processing

    Once your data is sent to Leapfin’s core engine, it undergoes a multi-step transformation process that ensures it is complete, consistent, and ready for accounting. This includes:

    • Data mapping

    • Enrichment rule application

    • Accounting rule application

    See the Data processing section of the data processing overview for more information.

    Note: Data processing runs only process data in unlocked periods.

    For both data import and processing runs, the format of the data displayed is broken down into two categories:

    • Run status graph

    • Run details

    Run status graph

    This graph displays your last 30 data import and processing runs.

    • Each bar represents a separate processing run. Click the bar to view the details for that run.

    • The color of the bar indicates the state of the run.

      • Light green: Running

      • Dark green: Success

      • Red: Failed

    • The bar's length indicates the run's duration in relation to the other processing runs displayed. Bars for in-process runs will grow in real time to match the current run duration.

    • Under the graph, you can view the success rate of your last 30 runs.

    Run details

    This section provides details about your data processing runs. Click each bar to view the details for that run.

    Field

    Description

    Date

    The date of the data processing run. The date format is as follows: YYYY-MM-DD.

    State

    The current status of the data processing run. Possible states include:

    • Success

    • Failed

    • Running

    Duration

    The current duration of the data processing run.

    • If the state of the run is Success or Failed, the duration will not change.

    • If the state of the run is Running, the duration will increase in real time until a Success or Failed state is reached.

    Run type

    Indicates how the run was initiated. Possible run types include:

    • Scheduled: The run was scheduled to initiate on a specific date and time.

      • Note: Currently, Leapfin schedules all data processing runs. Please reach out to Leapfin Support if you need to adjust your run schedule.

    • Manual: The run was initiated manually by clicking the Run Job button.

    Start date

    The date and time the run was started.

    Using 2024-02-21T18:12:59-07:00[America/Denver] as an example, the start date format is as follows:

    • Date and time: 2024-02-21T18:12:59 indicates the date February 21, 2024, and the time 18:12:59 (6:12:59 PM) in a 24-hour format.

    • Time zone offset: -07:00 represents the time zone offset from Coordinated Universal Time (UTC). In this case, it is 7 hours behind UTC, indicating the Mountain Time Zone (MST or MDT, depending on daylight saving time).

    • Time Zone Identifier: [America/Denver] provides the specific time zone identifier, indicating that the time is in the "America/Denver" time zone. This is the time zone used for Denver, Colorado, USA.
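
    If you consume these timestamps programmatically, note that the bracketed zone identifier is an extension to plain ISO 8601, so most parsers will not accept it as-is. Below is a minimal Python sketch for handling it; the helper name is illustrative, not part of any Leapfin tooling:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def parse_run_timestamp(value: str) -> datetime:
    """Parse a timestamp like '2024-02-21T18:12:59-07:00[America/Denver]'.

    The bracketed suffix is an IANA time zone identifier. Python's
    datetime.fromisoformat() does not accept it, so we split it off
    and re-attach the zone explicitly via zoneinfo.
    """
    if "[" in value:
        iso_part, zone_part = value.rstrip("]").split("[")
        dt = datetime.fromisoformat(iso_part)
        return dt.astimezone(ZoneInfo(zone_part))
    return datetime.fromisoformat(value)

ts = parse_run_timestamp("2024-02-21T18:12:59-07:00[America/Denver]")
print(ts.isoformat())  # 2024-02-21T18:12:59-07:00
```

    The offset in the string and the bracketed zone agree in the example above, so the conversion is a no-op on the wall-clock time; re-attaching the zone simply preserves the region information for later arithmetic across daylight saving transitions.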

    End date

    The date and time the processing run ended.

    • If the run state is Running, this field will remain blank until the run succeeds or fails.

    • This date follows the same format as the start date.

    Data interval start date

    The start date for the data period you are processing.

    Data interval end date

    The end date for the data period you are processing.

    Full date

    The date and time of the data processing run. This is the same date shown in the Date field, with the time included, and is unique to each run. If the run was initiated manually, the Full date shows the time the run was initiated. If the run was scheduled, the Full date is the same as the period start date.

    Manually import and process your data

    Leapfin imports and processes data automatically every 24 hours, based on the schedule established during your initial implementation. While you cannot directly alter your automated run schedule in the application, you can manually initiate data import and processing runs for current and historical data. Manual runs will not impact your automated import/processing schedule.

    Data import

    Use the following steps to manually initiate an importer run for current and historical data:

    Data type

    Process

    Current

    Click the Run Job button under the Importer graph. The button will be disabled until the importer run is complete.

    Note: Importer runs only ingest data from your integrated billing and/or payment service providers. Data ingestion via API or cloud storage operates independently and does not require an Importer run. Once data is pushed into Leapfin via API or cloud storage, you can immediately process it by running an Accounting job.

    Historical

    1. Click the Run Backfill button under the Importer graph.

    2. Select an Integration for the data you are importing. Currently, Leapfin only supports import backfill for Apple, Google, PayPal, and Stripe.

    3. Select the date range of the data you want to backfill.

      1. The date range is based on the record date. The start date is inclusive and the end date is exclusive; the boundary timestamps are 00:00:00.000 UTC for the start date and 24:00:00 UTC for the end date.

    4. Click Run Backfill. The button will be disabled until the backfill is complete.
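
    The boundary rule in step 3 can be sketched as a half-open interval. This is a minimal illustration, under the assumption that "24:00:00 UTC on the end date" denotes the following midnight; the function names are illustrative, not part of Leapfin:

```python
from datetime import date, datetime, time, timedelta, timezone

def backfill_interval(start: date, end: date):
    """Half-open UTC interval for an import backfill: inclusive from the
    start date at 00:00:00.000 UTC, exclusive at 24:00:00 UTC on the end
    date (represented here as 00:00 of the following day)."""
    lo = datetime.combine(start, time.min, tzinfo=timezone.utc)
    hi = datetime.combine(end + timedelta(days=1), time.min, tzinfo=timezone.utc)
    return lo, hi

def in_backfill(record_ts: datetime, start: date, end: date) -> bool:
    # True if the record's timestamp falls inside [lo, hi).
    lo, hi = backfill_interval(start, end)
    return lo <= record_ts < hi

lo, hi = backfill_interval(date(2024, 2, 1), date(2024, 2, 29))
print(lo.isoformat(), hi.isoformat())
# 2024-02-01T00:00:00+00:00 2024-03-01T00:00:00+00:00
```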

    Data processing

    Use the following steps to manually initiate an accounting (data processing) run for current and historical data. Accounting runs process data for all integrations. Note: Accounting runs only process data in unlocked periods.

    Data type

    Process

    Current

    Click the Run Job button under the Accounting graph. The button will be disabled until the accounting run is complete.

    Note: Accounting runs only process data imported into Leapfin within the last 24 hours.

    Historical

    1. Click the Run Backfill button under the Accounting graph.

    2. Select the date range of the data you want to backfill.

    3. Click Run Backfill. The button will be disabled until the backfill is complete.

    Understanding data processing time frames:

    Leapfin processes historical data based on the date the record was received. For example, imagine you ingested data for all of 2023 on January 1, 2024. If you want to run an accounting backfill for this entire range of data, you would set the accounting backfill date to January 1, 2024, as this is the date Leapfin received the data. If you set the range for any point in 2023, Leapfin would not process the data, as it had not yet been received.
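
    The received-date behavior in the example above can be sketched as follows. The records and helper are hypothetical, purely to illustrate why a range covering 2023 event dates matches nothing (both bounds are treated inclusively here for simplicity):

```python
from datetime import date

# Hypothetical records: (event date in the data, date Leapfin received it).
# All of the 2023 data below was ingested on January 1, 2024.
records = [
    (date(2023, 3, 15), date(2024, 1, 1)),
    (date(2023, 9, 2),  date(2024, 1, 1)),
]

def in_accounting_backfill(received: date, start: date, end: date) -> bool:
    # An accounting backfill selects by the *received* date,
    # not the event date recorded in the data itself.
    return start <= received <= end

# Backfilling January 1, 2024 picks up all the 2023 data received that day:
picked = [r for r in records
          if in_accounting_backfill(r[1], date(2024, 1, 1), date(2024, 1, 1))]
print(len(picked))  # 2

# A range covering 2023 matches nothing, because no data was received in 2023:
missed = [r for r in records
          if in_accounting_backfill(r[1], date(2023, 1, 1), date(2023, 12, 31))]
print(len(missed))  # 0
```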

    View your raw ingested data

    Raw data refers to data ingested directly from a data source, such as a payment processor or API. This data has not been transformed into standardized accounting records or had any rules applied.

    The Latest Raw Data table displays the last 200 raw data objects ingested from all data sources. You can use the Select Integration drop-down to view raw data by data source.

    Latest Raw Data table column descriptions

    The Latest Raw Data table displays the following information for your raw ingested data:

    Column

    Description

    Integration Id

    The identifier for the integration.

    External Id

    The unique identifier for each individual ingested object. This Id is assigned by the data source, not by Leapfin.

    Raw Data Type

    The type of raw data ingested. For example, if you push data via an API, the raw data type would be leapfin-core-data. If data is ingested via a payment processor, the raw data type reflects the type of the ingested object.

    Data

    The raw data contained within the ingested object. You can double-click this cell to view this data in an expandable tree view or as raw text.

    Click Copy Cell Value to copy the displayed data. This functionality is particularly useful when creating custom data mapping scripts on the Data Mapping page, as you can paste raw data samples from this cell in the script creator and use them to test your data mapping scripts.
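
    As a sketch of that workflow, a copied Data cell value can be pasted into a local script while you draft mapping logic. The payload shape below and the map_charge helper are hypothetical (a Stripe-like example), not a Leapfin schema or API:

```python
import json

# Hypothetical raw payload pasted from the Data cell of the Latest Raw
# Data table. Field names are illustrative only.
raw_cell_value = """
{
  "id": "ch_123",
  "amount": 4999,
  "currency": "usd",
  "created": "2024-02-21T18:12:59Z"
}
"""

def map_charge(raw: dict) -> dict:
    """Example mapping transform: convert a raw object into a simplified
    record, the kind of reshaping a custom mapping script might perform."""
    return {
        "external_id": raw["id"],
        "amount_cents": raw["amount"],
        "currency": raw["currency"].upper(),
    }

record = map_charge(json.loads(raw_cell_value))
print(record)  # {'external_id': 'ch_123', 'amount_cents': 4999, 'currency': 'USD'}
```

    Testing a transform against real copied samples like this, before wiring it up on the Data Mapping page, helps catch missing fields or type mismatches early.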

    Runs Updated At

    The date and time the data was created or last updated. Times are displayed in UTC.
