Importing

Importing lets you bring in data in a variety of formats, from a variety of sources (using Data Mappings), and through a variety of steps (with Import Definitions). It is a complete set of data integration tools, powerful enough not only to integrate data from other systems with ease, but also to automate other processes such as internal testing.

Importing Features

Importing allows you to select any source file and import it into your solution. If you want to import from a source other than a single file, such as a SQL Stored Procedure, see Import Definitions for more options. Imports are particularly powerful because they have the following core features:

  • Can be used to import, update or delete records, and also run custom processes.
  • Can be run locally in the client
    This is designed for importing small data sets that you want to test before importing large data sets as a task. You can easily view the imported data in your application to make sure it imported precisely the way you wanted. If the data is incorrect you can delete it, make your changes, import again and check the result.
  • Can be run against the task servers
    Improves performance by offloading the process to dedicated task servers. You're given progress updates and are free to work on anything else while the import runs, without tying up your local machine. See our Server Tasks for more info.
  • Provides three main methods of importing, available on both the client and task servers (a sketch of how they differ follows this list):
    • Validate Only
      Processes your entire import file and then applies all the validation logic (including your own custom logic) to all the data. Reports any validation failures, but doesn't commit the imported data back into the database.
    • Sequential
      Processes your import data a line at a time and then saves the changes for that line. If the save fails, it logs the problem and moves on to the next line. You're given a list of the failed lines; the rest will have been imported without issues.
    • Batch
      Sometimes you want all the import data lines to be imported entirely and correctly, or not at all. The Batch process works through your data importing a line at a time, but doesn't save the changes until the very end. If anything fails, the system reports the failure and prevents anything from being saved, so nothing is imported at all. If the save succeeds, all the data was imported without errors.
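
The difference between the three methods is essentially when, and whether, changes are committed. The sketch below illustrates that difference in Python; the dict records, the validation rule and the in-memory "database" are illustrative stand-ins, not the product's actual import pipeline.

    # Minimal sketch of Validate Only / Sequential / Batch semantics.
    # Records, the validation rule and the in-memory "database" are
    # illustrative stand-ins, not the product's actual import pipeline.

    DATABASE = []

    def validate(record):
        # Stand-in validation rule: a record must have a non-empty Name.
        return [] if record.get("Name") else ["Name is required"]

    def run_import(rows, mode):
        failures, pending = [], []
        for line_no, record in enumerate(rows, start=1):
            errors = validate(record)
            if errors:
                failures.append((line_no, errors))
                continue
            if mode == "validate_only":
                continue                    # never commit anything
            if mode == "sequential":
                DATABASE.append(record)     # commit this line immediately
            elif mode == "batch":
                pending.append(record)      # defer the commit to the end
        if mode == "batch":
            if failures:
                return failures             # any failure: nothing is saved at all
            DATABASE.extend(pending)        # all-or-nothing commit
        return failures

    rows = [{"Name": "Fund A"}, {"Name": ""}, {"Name": "Fund B"}]
    print(run_import(rows, "sequential"))   # line 2 fails; lines 1 and 3 are saved
    print(run_import(rows, "batch"))        # line 2 fails; nothing further is saved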

Standard Exporting

Our importing capability is made all the more useful by the fact that all of our standard grids throughout the application export data to Excel (or csv if not using the Full Desktop Application) using the Export Template format for that grid's entity type. This means you can immediately make changes in Excel and simply import your changes back into the system with a single click.

This provides a consistent experience throughout the application, giving users the power to work with data the way they want to, or are used to. If they want to update hundreds of rows of data, they can export to Excel, make their changes, and import them back into the system. They don't need to do anything technical or special to enable this functionality - it just works.
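
As a rough illustration of the same round trip done programmatically rather than in Excel (the file name and column headers below are hypothetical examples, not a real Export Template):

    # Read an exported csv, bulk-update a column, and write a new file that
    # is ready to import back. File name and headers are hypothetical.
    import csv

    with open("accounts_export.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    for row in rows:
        if row["Region"] == "EMEA":
            row["Status"] = "Inactive"      # the bulk change to apply

    with open("accounts_import.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)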


Import Process

When importing data from any source (csv or xls files, or data sources such as SQL), the import routine goes through several steps (the per-row logic is sketched in code after this list):

  • First the type of data being imported needs to be identified. This is taken care of by using Data Mappings, Import Definitions, or Export Templates.
  • Next the system parses the file:
    • If it is an xml file it is imported as is.
    • If it is a csv/xls file the system goes through every row in the file to be imported.
      • If a row is marked as not being for import [see more in Export Template Column Types - Import Data Action] it is skipped.
  • Next it parses all the key fields in the import to identify whether the record already exists. See Export Template IDs for an explanation of how the key columns of entities are identified. The standard keys can be overridden with custom Data Sources, or other fields, using Data Mappings if desired.
  • If the record exists it is loaded; if it doesn't, a new one is created (running any Default Values that are set up for that entity type too).
  • If the action is marked to delete the record, then it marks the record for deletion
  • If the action is marked to import then all the other import columns in the file have their values applied against the new, or loaded, record.
    • To set a field to actually hold null, you can use the following value:
      • <Null>
    • Any formulas in data mappings are evaluated during this process too.
  • If the import columns have a Custom Process then the system invokes that custom process. See our API documentation for adding custom actions to Export Templates that can be invoked during imports.
  • Identifies whether the import caused any changes:
    • A change occurs if a new record was created
    • A change occurs if any of the imported values differ from the existing values the record held - i.e. the property actually changed value
    • A change occurs if a custom process resulted in this, or other, records being created or changed
  • If changes were identified then it moves on to process the imported record depending on the import type running:
    • Validate Only: Runs validation on the imported record. Logs any failures.
    • Sequential: Attempts to save the changes. If the save fails it is logged, and the import then moves on to the next line.
    • Batch: Saves nothing until all rows are processed, at which point it saves all changes at once, either succeeding or failing and logging why.
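
The sketch below illustrates the per-row logic above - matching on key fields, loading or creating the record, applying values (including the <Null> token) and detecting whether anything actually changed. The data structures and field names are illustrative only, not the product's internals.

    # Illustrative per-row logic: key matching, load-or-create, value
    # application with <Null> support, and change detection.
    NULL_TOKEN = "<Null>"

    def resolve_record(existing, key_fields, row):
        # Return (record, created) by matching the row's key field values.
        key = tuple(row[k] for k in key_fields)
        if key in existing:
            return existing[key], False
        record = {k: row[k] for k in key_fields}    # new record (Default Values would run here)
        existing[key] = record
        return record, True

    def apply_values(record, row, key_fields):
        # Apply the non-key columns; return True if any value actually changed.
        changed = False
        for column, value in row.items():
            if column in key_fields:
                continue
            value = None if value == NULL_TOKEN else value
            if record.get(column) != value:
                record[column] = value
                changed = True
        return changed

    existing = {("ACC-001",): {"Code": "ACC-001", "Name": "Cash", "Description": "Petty cash"}}
    row = {"Code": "ACC-001", "Name": "Cash", "Description": NULL_TOKEN}
    record, created = resolve_record(existing, ["Code"], row)
    print(created, apply_values(record, row, ["Code"]))   # False True: only Description changed (to null)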

You can hook into this process using our API. You can create Custom Processes that invoke algorithms from the import routines.
You can also completely change the fields that are available to be imported by using the EntityDescriptorTemplateExtender class and IEntityDescriptorImportExtender interface to define custom entity properties and how to import them. The system itself uses this to provide custom Transaction import columns for every Transaction Value Type you create. See our API documentation for more detail.
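
As a conceptual illustration only - this is not the actual EntityDescriptorTemplateExtender / IEntityDescriptorImportExtender API, which is described in our API documentation - the job of an import extender can be pictured as declaring extra importable columns and describing how each imported value is applied:

    # Conceptual shape of an import extender - not the real interface.
    # It declares one extra importable column per Transaction Value Type
    # and says how an imported value for that column is applied.
    class TransactionValueImportExtender:
        def __init__(self, value_types):
            self.value_types = value_types

        def extra_columns(self):
            # Columns the import routine should accept beyond the standard ones.
            return [f"Value: {vt}" for vt in self.value_types]

        def apply(self, transaction, column, raw_value):
            # Called for each extra column when a row is imported.
            value_type = column.removeprefix("Value: ")
            transaction.setdefault("values", {})[value_type] = float(raw_value)

    extender = TransactionValueImportExtender(["Gross", "Tax"])
    txn = {"id": "T-1"}
    for column, raw in [("Value: Gross", "100.00"), ("Value: Tax", "20.00")]:
        extender.apply(txn, column, raw)
    print(extender.extra_columns(), txn)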


Import Data Format

The source data being imported must conform to our Export Template format for that entity type, for all the columns you intend to import. See our Export Template documentation for more information about the columns found in import files.

Alternatively you can use Data Mappings to translate any data format (with or without column headers) into the equivalent fields expected by the import routine.
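
Conceptually, a data mapping renames source columns to the fields the import expects and can derive fields from formulas. A rough, hypothetical sketch (the column names, mapping structure and formula are illustrative, not the real Data Mappings configuration):

    # Hypothetical illustration of what a data mapping does conceptually:
    # rename source columns to the expected Export Template fields and
    # evaluate simple formulas for derived fields.
    MAPPING = {
        "acct_code": "Code",
        "acct_name": "Name",
    }
    FORMULAS = {
        # A derived field built from the source row; the expression is illustrative.
        "Description": lambda src: f"{src['acct_name']} ({src['acct_code']})",
    }

    def map_row(source_row):
        target = {MAPPING[col]: val for col, val in source_row.items() if col in MAPPING}
        for field, formula in FORMULAS.items():
            target[field] = formula(source_row)
        return target

    print(map_row({"acct_code": "ACC-001", "acct_name": "Cash"}))
    # {'Code': 'ACC-001', 'Name': 'Cash', 'Description': 'Cash (ACC-001)'}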


Views

Importing data doesn't require any views in the system. It is a process that can be invoked from commands, our API, or any kind of server task.

When invoked from the UI, the system prompts you for a file to import from and then performs the following steps (the decision flow is sketched in code after this list):

  • Ensures the file exists
  • If it is an Excel file it parses the file and loads the data using the named ranges in Excel. [See our Export Templates for more detailed information on Excel.]
  • If it is an xml file it imports it as is; xml packages contain all the information required to import the data.
  • If it is a csv file the system initially tries to parse the file using a Unicode format (the default expected format)
    • If this fails the system prompts you for the type of entity being imported.
    • If it can read the file but can't determine the type of data being imported, it again prompts for the entity type.
    • If it can determine the type of data being imported, and data mappings exist for that entity type, it prompts you to choose whether to use a data mapping.
  • Once it has the data to be imported, and any data mapping, it starts the import process described above.
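
A sketch of that decision flow is below. The parse and prompt helpers are trivial stand-ins for the real parsers and UI prompts; only the branching mirrors the steps above.

    # Sketch of the file-type decision flow. The parse_* and prompt_*
    # helpers are hypothetical stand-ins for the real parsers and prompts.
    import os

    def parse_excel(path): return ["rows..."]                    # via Excel named ranges
    def parse_xml_package(path): return ["package..."]           # self-describing package
    def parse_csv(path, encoding): return ["rows..."], None      # (rows, detected entity type)
    def prompt_for_entity_type(): return "EntityChosenByUser"
    def prompt_to_use_data_mapping(entity_type): return None     # only offered if mappings exist

    def load_import_source(path):
        if not os.path.exists(path):
            raise FileNotFoundError(path)                        # the file must exist
        ext = os.path.splitext(path)[1].lower()
        if ext in (".xls", ".xlsx"):
            return parse_excel(path), None
        if ext == ".xml":
            return parse_xml_package(path), None
        try:
            rows, entity_type = parse_csv(path, encoding="utf-16")   # default Unicode format first
        except UnicodeError:
            entity_type = prompt_for_entity_type()               # unreadable: ask for the entity type
            rows, _ = parse_csv(path, encoding="utf-8")
        if entity_type is None:                                  # readable, but the type is unknown
            entity_type = prompt_for_entity_type()
        return rows, prompt_to_use_data_mapping(entity_type)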

Data can also be imported via Import Definitions, which give a more prescriptive method for defining the type of data source and offer more data sources (such as SQL stored procedures).

If the import is running as a server task, the system displays the task's progress as it is processed. You can always check on the progress of any task through the Server Tasks view.

 


Advanced

Importing data is a common function across most software solutions, including legacy financial services products. It is vital that a solution provides the functionality to integrate with other third-party software products.

Importing is also used extensively throughout the implementation phase of any onboarding project. It is so critical to the process that implementation times are often directly related to the ETL capabilities provided by the software solution itself. You can see this clearly illustrated by the number of third-party tools that exist purely to transform and integrate data - to get it out of one system and into another.

Legacy software providers are notoriously bad at providing the tools needed to automate these processes. By combining import routines, Data Mappings and Import Definitions, we automate as much of this problem domain as possible, accelerating your implementation time. More than that, our tools are capable enough to be repurposed for other problems such as automated testing [more info].

If that wasn't enough, we provide one more layer of capability over all our Data Integration tools to ensure you can get through these processes as quickly, efficiently, and accurately as possible. Our Canvas [more info] technology means you can instantly create a test copy of your entire system. This solves many problems within financial services (such as modelling, etc. [more info]); for data integration it provides a great way to test the accuracy of your integration routines in a live environment. Follow these simple steps to understand the impact of your data integration (a hypothetical automation of the loop is sketched after the steps):

  1. Create a new Canvas
  2. Log into that canvas
  3. Run any imports, or import definitions, you want to test.
  4. Run any reports, reconciliations, tests, etc you want to run to ascertain the validity of those imports.
  5. If the imports, or data integration, work the way you expect then just promote the changes in that canvas to the live system - you're done. You can now re-run these processes later knowing they are configured correctly and worked on this data.
  6. If the imports didn't work correctly:
    1. Identify the issues
    2. Ditch the Canvas - we don't care about it anymore
    3. Update your import definitions, mappings, data, etc from the issues identified.
    4. Go back to Step 1 and start again.
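
If you wanted to automate this loop, it might look roughly like the following. Every function name here (create_canvas, run_imports, run_checks, promote, discard, fix_configuration) is a hypothetical placeholder, not the product's actual API:

    # Hypothetical automation of the canvas test loop; all functions are
    # illustrative placeholders, not the product's actual API.
    def create_canvas(name): return {"name": name}               # step 1 (step 2: log in)
    def run_imports(canvas): pass                                # step 3
    def run_checks(canvas): return []                            # step 4: reports, recs, tests
    def promote(canvas): print("promoted", canvas["name"])       # step 5
    def discard(canvas): print("discarded", canvas["name"])      # step 6.2
    def fix_configuration(issues): pass                          # step 6.3

    def refine_until_correct(max_attempts=5):
        for attempt in range(1, max_attempts + 1):
            canvas = create_canvas(f"import-test-{attempt}")
            run_imports(canvas)
            issues = run_checks(canvas)
            if not issues:
                promote(canvas)          # the changes reach the live system
                return True
            discard(canvas)              # ditch the canvas and try again
            fix_configuration(issues)
        return False

    refine_until_correct()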

That's it. With a simple Canvas you can repeatedly test your data integration steps in a live production environment without worrying about processes being configured incorrectly. You can just keep ditching your canvas until you've refined the process to work correctly.
Data Integration made simple.