Integration quality guidelines

22 Feb 2024


You are of course free to write code the way you want, but there are a few guidelines we expect to be honoured by applications that use our platform. These are important: failing to comply with them could mean we do not allow your API integration on our production environment. In extreme cases it could also mean your API token is temporarily revoked, for instance when we discover that your API integration places excessive load on our production systems because of flaws in its behaviour. To help you write solid API integration code, we would like to share our best practices. During a certification process we will definitely take these items into consideration.

Manage your batch sizes

Entities that can contain a lot of records expose at least two parameters named Offset and RecordCount. These are fairly self-explanatory but essential for a well-performing integration. When requesting data you start with an offset of 0 and raise the offset by the number of rows you retrieve each iteration. When the last iteration is reached you'll receive fewer records than requested, which indicates there is no more data available. Don't send an extra request in that situation: receiving fewer records than requested means you have reached the last ones.
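A minimal sketch of such a fetch loop, in Python for illustration; `get_page` stands in for whatever client call your integration uses, so the parameter names are assumptions rather than the real Retail3000 client API:

```python
BATCH_SIZE = 10000  # advised range for most tables: 5000-10000

def fetch_all(get_page, batch_size=BATCH_SIZE):
    """Fetch every record in batches using Offset/RecordCount semantics."""
    records = []
    offset = 0
    while True:
        page = get_page(offset=offset, record_count=batch_size)
        records.extend(page)
        # A short page means we received the last records; no extra
        # "empty" request is needed to confirm the end of the data.
        if len(page) < batch_size:
            return records
        offset += batch_size
```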

The advised RecordCount is somewhat dependent on the actual table being requested, but for most tables a value between 5000 and 10000 is a decent amount. You can try different numbers to see what suits your application best, but please refrain from using extremely large or small batch sizes.

Large requests take a long time to complete and can put a heavy workload on our servers, all the while placing unnecessary locks on other requests or components. Small batch sizes, however, mean a lot of requests are used to retrieve a fairly small amount of data. Because the number of free requests available within the customer's license is limited, this can mean additional costs are invoiced due to inefficient coding.

Use timestamp based synchronization

Almost every table in the Retail3000 ecosystem contains a LastModifiedDateTime property which, as the name suggests, indicates when that record was last modified. In addition, most request or filter objects contain a property with a name similar to LastModifiedDateTimeFrom. This property can be used to retrieve only the records that have been modified since the last time you requested data.

The best way to create a timestamp-based synchronization loop is to locally store our server date and time at the start of your synchronization run. If you save the timestamp at the end of your run instead, you will miss changes made between the moment you requested the data and the moment processing completed. It is necessary to request our server date and time to prevent small differences between your local clock and our server clock, not to mention timezone differences. The GetServerDateTime method returns our date and time.

Retrieve the server date and time you stored locally at the end of the previous run and use it as the LastModifiedDateTimeFrom parameter (or similar) in the request. This parameter works in addition to the Offset and RecordCount parameters mentioned before. If, for example, 27500 records were modified since the last synchronization, you should execute three requests with the same LastModifiedDateTimeFrom and RecordCount values (say 10000), but with an Offset of 0, 10000 and 20000 respectively. On the third request you will receive only 7500 of the 10000 records requested, which indicates no more data is available. Afterwards, locally save the server date and time you requested before starting the synchronization, so that value is used in the next synchronization run.
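The whole run can be sketched as follows. Here `api` and `store` are hypothetical stand-ins for the web service client and your local persistence layer; the snake_case method names are illustrative (the actual call described above is GetServerDateTime, plus an entity-specific Get request):

```python
RECORD_COUNT = 10000

def synchronize(api, store):
    """One timestamp-based synchronization run."""
    # Capture the SERVER clock before fetching anything, so changes made
    # while we are processing are picked up on the next run.
    run_started_at = api.get_server_date_time()
    since = store.load_last_sync()  # value saved by the previous run
    offset = 0
    while True:
        page = api.get_records(last_modified_date_time_from=since,
                               offset=offset, record_count=RECORD_COUNT)
        store.apply(page)
        if len(page) < RECORD_COUNT:
            break  # fewer records than requested: we have them all
        offset += RECORD_COUNT
    # Only now persist the timestamp taken at the START of the run.
    store.save_last_sync(run_started_at)
```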

In the rare case that you encounter a large table without a LastModifiedDateTimeFrom property, please let us know. We want to provide a consistent interface and will gladly expand filter objects where this property is missing.

It is absolutely a reason to decline a production certification when we discover that the date and time request is not handled properly. We have seen situations where API integrators used their own local datetime, but also situations where such a datetime was kept at the first day of the current month or week for that whole period. As a result, we have to send the same data again and again on every synchronization, which is a waste of resources on our side.

Think about the initial fill

One of the things third-party developers often run into is problems during the initial fill of a newly connected integration. A lot of data is stored in the Retail3000 ecosystem, and tables with millions of records per retailer are by no means an exception. Depending on the type of integration, this could mean that all these records need to be obtained when your application connects for the first time. Take the time to work out a routine that can handle that amount of data in a fool-proof way.

One of the more important things to accommodate is interruption of the initial fill, and recovery from it. We've seen on a number of occasions that applications restart the complete initial fill cycle when a minor connection error or timeout occurs. A better approach is to keep track of the progress and pick up the initial fill cycle where it stopped. It is also strongly advised not to abort on a single connection error or timeout: just catch such exceptions and try again. To prevent retrying forever, abort after two or three attempts. Otherwise your API integration might keep requesting large sets of data from our system indefinitely, flooding our resource pools. This is one of the reasons we might be forced to revoke an API token from a production environment when we see this happening.
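A bounded retry wrapper along those lines might look like this. The exception types are placeholders for whatever your client library raises on a connection error or timeout:

```python
import time

def call_with_retry(request, max_attempts=3, delay=1.0):
    """Retry transient failures a bounded number of times, then give up,
    so the integration never keeps hammering the platform indefinitely."""
    for attempt in range(1, max_attempts + 1):
        try:
            return request()
        except (ConnectionError, TimeoutError):
            if attempt == max_attempts:
                raise  # persistent problem: abort instead of looping forever
            time.sleep(delay * attempt)  # simple linear backoff
```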

Also be sure to do a test run of the initial fill to avoid surprises on the go-live date. It can take up to several days on large datasets, so it is good to know the duration in advance.

Deleted data

Of course data isn't permanent and records will be deleted. Since you're not retrieving all records over and over again, you need a way to know which records were deleted in the meantime. Every record that gets deleted is recorded in the DeletedItems entity. This entity consists of a table name, the ItemId of the deleted record and a deletion timestamp. It can be retrieved by calling GetDeletedRecords with either the filter or request parameters.

If you incorporate this table in your synchronization routine and delete the returned records on each synchronization, you should stay in sync with our data. It is bad practice to retrieve all records from an entity from time to time just to compare all our records against your table.
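Applying the DeletedItems rows to a local copy can be as simple as the following sketch; the field names on the rows are assumptions based on the description above:

```python
def apply_deletions(deleted_items, local_tables):
    """Remove locally cached records that the DeletedItems entity reports
    as deleted. `local_tables` maps table name -> {ItemId: record}."""
    for row in deleted_items:
        table = local_tables.get(row["TableName"])
        if table is not None:
            table.pop(row["ItemId"], None)  # ignore ids we never cached
    return local_tables
```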

Create smart requests

Once you know the basics it is quite easy to start pulling data from the Retail3000 API, but that doesn't mean you should take the default examples and stick with them. Instead, investigate which parameters are available and assess whether you can use them to limit the amount of data you request. By default most requests return all data, but are you really interested in, for example, sales orders that are cancelled or already closed?

One parameter you'll often encounter is the 'Cancelled' property. Items that are marked as cancelled will not be of interest in the majority of use cases. When retrieving logistical entities like purchase or sales orders, reservations, goods-in, etc., you will probably be interested in records with a particular status. When retrieving discounts, most likely the active or future discounts are all you need, while past discounts can be ignored.

These are only examples, but a lot can be gained in performance and data traffic by constructing your requests in a smart, well-thought-out manner.

Think about the data you need

In close relation to constructing smart requests, you should also keep asking yourself what data you actually want to see. A good example is GetProducts, which we'll discuss in detail later on. GetProducts in its entirety returns somewhere around 180 columns; multiply that by the number of rows returned and you end up with a lot of data very quickly. To limit the amount of data, you can use the FieldsToReturn property in the GetProducts query. This will be explained later on during the course.

Generally speaking, the same data can often be obtained in different grades of efficiency, depending on the degree of detail you need. For example, you could be interested in the email address of a customer record and, after performing GetCustomers, choose to execute a GetEmailAddresses request for each customer record returned. But if you look closely, ViewCustomerInfo also contains a DefaultEmailAddress property. If the default address is all you're interested in, then there's no point in all the additional requests.
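Assuming each ViewCustomerInfo row arrives as a dict-like record (the record shape is an assumption; the property names come from the article), the N extra GetEmailAddresses calls disappear entirely:

```python
def collect_default_emails(view_customers):
    """Take DefaultEmailAddress straight from the ViewCustomerInfo rows
    returned by GetCustomers, instead of issuing one extra request per
    customer."""
    return {c["ItemId"]: c["DefaultEmailAddress"] for c in view_customers}
```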

New<Entity>Info and New<Entity>Filter objects

Because the Nullable<T> type was not yet available during the early years of Retail3000's development, a design choice was made at the time to incorporate custom Null values for certain data types.

Properties are initialized to those values in the constructors within Retail3000. Unfortunately, those constructors are not called on the client side of a WebService endpoint. This means that newly constructed objects are not provided with the proper null values, which will lead to errors or unexpected behaviour.

To get an object that does contain the proper default values, use the appropriate New<Entity>Info method for entities you are about to add, and a New<Entity>Filter for filter entities you want to pass along to Get requests.

Fields to return

With FieldsToReturn you can specify which columns or fields you would like to see returned. Possible values are:

-    ItemId
-    Products.*
-    *
-    Custom field list

If set to ItemId, only the ItemId column is populated. The other properties will still be part of the returned objects, but will all be empty or null.

Products.* will only return the values directly stored in the products 'Info' entity. As you might remember from earlier on, there is a difference between an <Entity>Info and a View<Entity>Info object; this mode in essence only returns the values present in <Entity>Info. As with ItemId, the other properties will still be there but are all null or empty.

If * is provided, the entire View<Entity>Info object is returned. This result is very rich in data but should only be used when requesting a fairly small set of products. The best option, performance-wise, is the last one, where you specify a custom list of the properties you want or need. In this mode you provide a comma-separated string containing the property names you would like to see returned. During certification, we will investigate the correct use of FieldsToReturn.
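The four modes can be expressed as plain strings on the filter object; a small helper keeps the comma-separated custom list in one place. The dict shape of the filter is an assumption for illustration; the mode strings come from the list above:

```python
def fields_to_return(*property_names):
    """Build the comma-separated custom field list for FieldsToReturn."""
    return ",".join(property_names)

# The four supported modes, as described above:
ids_only   = {"FieldsToReturn": "ItemId"}      # keys only, rest null/empty
info_only  = {"FieldsToReturn": "Products.*"}  # bare <Entity>Info values
everything = {"FieldsToReturn": "*"}           # full View<Entity>Info
custom     = {"FieldsToReturn": fields_to_return("ItemId", "Description",
                                                 "SalesPrice")}
```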

View versus Info objects

All 'Get' methods that return multiple rows (so not a get for a specific {ItemId}) return data in a 'View' object. This object derives from an original 'Info' object and contains more fields than the source entity. For example, if a source entity contains a property CountryId, the View result might include fields like CountryCode and CountryDescription. These fields have their source in the Countries entity and are sent back to enrich the Info object. This way, the Get output can be displayed in lists, for example, so a country description column can be shown. Because the 'View' derives from the original 'Info' object, it will always contain all the Info entity properties, extended with the extra 'View' properties.


