MetaLocator Teams is an Enterprise add-on developed to provide role-based permissions in MetaLocator. It enables the creation of user groups (or "Teams") which can have limited access to:

  1. Locations

  2. Interfaces

  3. Categories

  4. Leads

Each user group can essentially work in isolation from other groups, since each of the data classes above is restricted to only the records accessible to that group.

This guide provides a plan to organize your data, unique keys and users so they can be uniformly managed in MetaLocator. In this guide we assume you have a basic working knowledge of MetaLocator, especially importing data.

Key questions we will answer and explore in this guide include:

  • What are the groups that will be created to separate data and users?

  • Will data be centrally managed, or will individual team administrators and managers be importing data from their own data sources?

  • What will the data structure look like for each team?

  • How will a location be uniquely identified across all data sources? Is there an existing unique ID, like a Salesforce ID, or CRM ID that can be used across all data sets?

MetaLocator Teams was developed to ensure large organizations with multiple divisions and users can operate independently within their groups without disturbing others. In this guide, we'll explore a hypothetical organization that has multiple divisions. These divisions will have their own locations and lists of products and services. We'll also explore the scenario where one location may exist in multiple divisions. We see this often when a dealer might provide both residential and commercial services.

Creating Groups

We'll begin by creating a series of groups to represent each division. We'll use Retail, International and Commercial for our groups in this example. Those divisions represent the applications of MetaLocator in our hypothetical organization. They also represent the teams of people in our hypothetical organization that will be managing the solution long term.

Other common examples of user groups include:

  • By Industry - Agriculture, Coatings, Pumps

  • By Function - Marketing, Production, Manufacturing

  • By Audience - Residential, Commercial, Industrial

  • By Brand - One Group for each Brand, e.g. Widgets, Wonkets, Plinkets

In MetaLocator Teams, your chosen groups should align with the isolated use cases that require siloed data, interfaces, categories and leads. Those use cases are often aligned with the actual users who manage the solution, but not in all cases. We'll get into the Users side of things more later in this guide.

To create the User Groups, click User Manager, then User Groups. Click New and provide a name for each group.

Our groups are now shown as below:

Importing Data

For now, we are going to import our data from 3 separate CSV files, one for each group. Later in this guide we will consider how to manage multiple data sources.

When importing data, we can control the Group of the data we're importing by using any of the following methods:

  • Importing data as a Group Administrator - Data is automatically owned by the user group or groups assigned to the Group Administrator.

  • Specifying a UserGroupId column in the data - The UserGroupId value for a given row can contain a comma-separated list of group names, as in Retail,International. The resulting row will be placed in each of the specified groups. If providing a comma-separated list is a barrier, the row can instead be repeated in the data with a different UserGroupId value on each copy.

  • Choosing the Context Group on the Import Options screen (shown below) - When importing data, choose Update Existing and Insert New, then choose the appropriate Context Group. That choice controls the group context used during import, so any new records will be created within the chosen group.

  • Creating an import mapping rule of type UserGroupId - A mapping rule derives the user group from a field name and field value present in the data. This is convenient when a field value in the data should drive the group assignment. E.g. if the "division" column contains "retailer", place the record in the Retail group.
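The two data-driven methods above (a UserGroupId column and a mapping rule) can be sketched in Python. This is an illustrative model only; the rule format and column names are assumptions, not MetaLocator's internal implementation:

```python
import csv
import io

# Hypothetical mapping rules: (column, value, group).
# The tuple format is illustrative, not MetaLocator's actual rule syntax.
MAPPING_RULES = [("division", "retailer", "Retail")]

def groups_for_row(row):
    """Derive the target user groups for one imported row."""
    # Explicit UserGroupId column: may hold a comma-separated list of groups.
    if row.get("UserGroupId"):
        return [g.strip() for g in row["UserGroupId"].split(",")]
    # Otherwise fall back to mapping rules driven by another field's value.
    return [group for col, val, group in MAPPING_RULES
            if row.get(col, "").strip().lower() == val]

data = io.StringIO(
    "Name,UserGroupId,division\n"
    'Store A,"Retail,International",\n'
    "Store B,,retailer\n"
)
for row in csv.DictReader(data):
    print(row["Name"], groups_for_row(row))
# Store A -> ['Retail', 'International'], Store B -> ['Retail']
```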

At this stage in the guide, we will use the third option above, since it is a great way to quickly populate data into multiple groups as a single user without having to do any other setup work or log in and out as various users.

Before proceeding, ensure your account has an external key field configured. If you're following along with this tutorial exactly, our key field is named StoreNo.

Each file is available for download here:

  1. retail.csv

  2. commercial.csv

  3. international.csv

We will import each using the following process:

  1. Log in as an Administrator user

  2. Click Locations, Import and choose CSV

  3. Map the Products and Services columns to Category1, Category2 and so on.

  4. Map the StoreNo column to an existing External Key column of the same name.

  5. Choose "Update Existing and Insert New" on the Import Options screen.

  6. Select the corresponding Context Group, (e.g. Retail for retail.csv)

  7. Check the boxes for Delete Unmatched Records and Delete Unmatched Categories as shown below:

These option choices ensure the following:

  1. Records are owned by the correct group

  2. Records and categories are bi-directionally synchronized with the incoming data set, meaning all updates, inserts and deletes are handled during the import.
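The bi-directional synchronization above can be sketched as an upsert keyed on the external key, followed by removal of unmatched records. This is an illustrative model of "Update Existing and Insert New" combined with "Delete Unmatched Records"; the data shapes are assumptions, not MetaLocator's internals:

```python
def synchronize(existing, incoming, key="StoreNo"):
    """Sketch of a keyed bi-directional sync: upsert every incoming row by
    external key, then delete records absent from the incoming file."""
    current = {rec[key]: rec for rec in existing}
    seen = set()
    for row in incoming:
        # Insert new records, or merge updates into existing ones.
        current[row[key]] = {**current.get(row[key], {}), **row}
        seen.add(row[key])
    # Delete Unmatched Records: drop anything not present in the feed.
    return [rec for k, rec in current.items() if k in seen]

existing = [{"StoreNo": "1", "Name": "Old Name"}, {"StoreNo": "2", "Name": "Closed"}]
incoming = [{"StoreNo": "1", "Name": "New Name"}, {"StoreNo": "3", "Name": "Added"}]
print(synchronize(existing, incoming))
# [{'StoreNo': '1', 'Name': 'New Name'}, {'StoreNo': '3', 'Name': 'Added'}]
```

Store 1 is updated, store 3 is inserted, and store 2 is deleted because it no longer appears in the incoming data, mirroring the import options chosen above.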

After import, under Locations > All Records, the user groups will be populated accordingly.

The categories are also automatically created within the target user group.

At this stage, we are free to create Interfaces within a given group, and those Interfaces will be limited to the locations and categories accessible to that group. Also, any leads generated will be automatically owned by the group. As a Group Administrator, any Interfaces created will automatically be limited to the Groups available to the Group Administrator. As a full Administrator, you can set the Interface groups by selecting the checkbox next to the Interface name, and choosing Set Interface Groups from the ellipsis menu in the toolbar under Interfaces.

Multiple Data Sources and External Keys

It is critical for MetaLocator to be able to uniquely identify a record during import in order to support bi-directional synchronization. The External Key field is the unique identifier that enables this essential behavior. It provides for the safe identification and deletion of records during bulk import operations. All teams share the same, single external key column.

The external key column, called "StoreNo" in our examples so far, must be unique across all rows in MetaLocator. In a Teams setup it is common to have data coming from different systems or sources. In practice, we see the following scenarios:

  1. There is a centralized system that is providing data, and the unique key is assigned by that system. For example, if all data across all teams is managed in Salesforce, the key is usually the AccountID, which is automatically unique and provided by Salesforce. Most CRMs have a similar key field. This is the ideal scenario.

  2. There is no centralized system and multiple, disparate data sources. Keys are available in some data sources, but not all. For example, one set of data is in Google Sheets with no key field and another set is imported from a CSV file that happens to include a key field from the system it was exported from.

In scenario 1 above, the key is provided in the data, and should be mapped to the single external key column in MetaLocator during the import.

Scenario 1 also allows for one location to appear in multiple groups and data sources. E.g. a dealer location that interacts with both the Retail and Commercial divisions should not require duplicating the dealer's record in MetaLocator. In this scenario, locations in multiple groups could appear as rows in one or more data sources, or as a single row with multiple groups specified in a UserGroupId column as described above. Since the External Key value is the same, only one record is created. When that row is imported again from another group, the record is not duplicated; it is simply added to the new group. Similarly, if the record is in multiple groups, say Commercial and Retail, and removed from the data source of Commercial, the record is not deleted; it is simply removed from the Commercial group. This effectively deletes it from Commercial while it remains in Retail.
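The shared-key behavior described above can be sketched as set operations on group membership. This hypothetical in-memory model (not MetaLocator's implementation) shows why importing the same key from another group adds a membership rather than duplicating the record, and why dropping the key from one group's feed removes only that membership:

```python
# Hypothetical model: external key -> set of group names the record belongs to.
memberships = {}

def import_feed(group, keys):
    """Import one group's feed: keys present in the feed join the group;
    keys missing from the feed leave only this group, never the whole record."""
    for k in keys:
        memberships.setdefault(k, set()).add(group)
    for k, groups in list(memberships.items()):
        if group in groups and k not in keys:
            groups.discard(group)          # removed from this group only
            if not groups:
                del memberships[k]         # deleted once no group remains

import_feed("Retail", {"D-100"})
import_feed("Commercial", {"D-100"})       # same dealer, not duplicated
import_feed("Commercial", set())           # dropped from Commercial's feed
print(memberships)                         # {'D-100': {'Retail'}}
```

The dealer record survives the Commercial deletion because it still belongs to Retail; only when every group has dropped it would the record itself be removed.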

In Scenario 1, if the active user is importing a record whose external key value is already present in the system, but not in the groups available to the active user, it will not be updated or added to the active user's groups, because the active user lacks permission to edit the existing record. The importing user must be an Administrator or Group Administrator with access to the appropriate groups in order to update the existing record and add it to the target group.

Scenario 1 is ideal because it minimizes duplication of location data, which:

  1. Creates a canonical, authoritative location record

  2. Reduces per-location costs for geocoding and data storage

  3. Reduces the chance for inconsistency

  4. Reduces duplication of efforts for data updates

In scenario 2, we have a few challenges to consider:

  1. Key values must be created where they do not exist

  2. Key values could theoretically overlap (E.g. Sam assigns StoreNo 1 to a record in Retail and Jill does the same thing in Commercial)

To address these challenges, we can introduce a new field in MetaLocator, and a corresponding column in our data, for use in a Composite External Key. The column can be called "segment" or "division" and may contain values similar to our Group choices, as shown below:

This allows users to assign any key value to StoreNo so long as it is unique within their division.

The StoreNo field becomes a regular text or integer field and a new external key field is created referencing storeno and division as its components.

The new UniqueID column is not managed by your users or included in the data import. It is dynamically assembled by MetaLocator during import from the values of division and storeno, and is visible as read-only. It is unique across all divisions and allows users to create and assign unique values to storeno according to whatever method they deem fit for their data source. For simple spreadsheets this could be an auto-incrementing field, so long as it is unique by row and consistent, meaning the numbers, once assigned to a location, are not re-assigned later.

It is highly recommended to make both division and storeno required fields in MetaLocator, by setting row-level validation rules for each field. This ensures that any rows without both division and storeno populated will be rejected.
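A minimal sketch of how a composite key could be assembled and validated, assuming the field names division and storeno; the hyphen delimiter and the validation behavior shown here are illustrative assumptions, not MetaLocator's exact implementation:

```python
def composite_key(row):
    """Assemble a composite unique ID from division + storeno, and reject
    rows missing either component, mirroring row-level validation rules."""
    division = (row.get("division") or "").strip()
    storeno = (row.get("storeno") or "").strip()
    if not division or not storeno:
        raise ValueError("row rejected: both division and storeno are required")
    return f"{division}-{storeno}"

# Sam's Retail StoreNo 1 and Jill's Commercial StoreNo 1 no longer collide:
print(composite_key({"division": "Retail", "storeno": "1"}))      # Retail-1
print(composite_key({"division": "Commercial", "storeno": "1"}))  # Commercial-1
```

Because the division is part of the key, each team only needs StoreNo values that are unique within their own division.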

Background Data Importing

Properly establishing recurring background data import jobs is an important step along the way to a successful, sustainable deployment of MetaLocator. These jobs can regularly connect to your external data sources and pull in data. Jobs can be established by a central Administrator user, or they can be created by individual Group Administrators.

Establishing jobs as a single Administrator user provides for centralized management of data importing and ETL tasks. This responsibility is commonly centralized in the organizational structure, e.g. handled by the IT department. However, when there are multiple data sources and the credentials for those sources are not centrally managed, it can be a difficult task for one user. One common example is where users manage data in Google Sheets, and each sheet is owned by a different user. In order to centrally manage that data, the sheets should be shared with the Google account of an administrator who then establishes the import jobs.

Centralized Management - One Import Job

If a single job imports all data by a single Administrator, then the Group of the row must be established by the data in a UserGroupId column or by an import mapping rule as described above.

Centralized Management - Multiple Import Jobs

If one Administrator is creating multiple background jobs, in addition to the data-driven options described above, the Administrator can configure the job to run under the Group Context of the target group. This is configured when performing the import and choosing the appropriate Group Context from the Import Options screen. The Administrator can also choose a user to run the job as, which is controlled by the user_id parameter of the job settings. The value should be set to the user id of the intended user, typically a Group Administrator in this context.

Distributed Management - Multiple Import Jobs

If each Group Administrator creates their import jobs independently, the Group Administrator connects to the data source and configures the import job as needed.

One downside to delegating the creation of import jobs to Group Administrators is that the data import schedule can become scattered if not well maintained. For example, Retail data gets imported every 3 hours while Commercial data gets imported every day at noon. Sometimes that is desirable and appropriate; however, a consistent schedule across all data sources can be easier to manage. Regardless of who creates the job, the job settings, including the schedule, can be altered and aligned by an Administrator.

External Keys in Distributed Management

It is critical that the External Key strategy is commonly understood and validated as described above if data importing is to be successfully delegated and managed by individual Group Administrators. When a new Group Administrator is created, they receive an automated invitation which includes a downloadable import template. The template will include columns for the custom fields established in the account:

The template and email do not include information regarding how to populate key columns, so you will want to communicate that information in the welcome message option when creating the account, or however you choose to communicate with your team members. Regardless of what these users do during an import, if validation is established on the key field(s) and their access is limited to their group, the impact of any bulk actions will be similarly limited.

Creating Users

MetaLocator Teams introduces the Group Administrator and Group Manager roles. These users are limited to accessing the data, categories, leads and Interfaces of one or more groups.

In a distributed management setup, each group will commonly have a Group Administrator responsible for that division's MetaLocator applications. To create Group Administrators, see this guide.

Users within Groups

The most common implementation in MetaLocator Teams is to have one Group Administrator within each group who can create Interfaces and Categories for the group. Then, within the group there are Managers who can import and update data. One additional consideration is country managers: managers limited by both group and country. This is useful when team members manage data for a specific country and their contributions should not affect data outside their country.

Above the Group Administrators you can position an Administrator who has access to all groups. This is typically a centralized IT role, like a database administrator or IT manager that is commonly assigned tasks that can influence the entire Enterprise.

Access to Fields

Some groups might have access to certain fields that only relate to their area. Access to a field controls whether it is available to edit when importing data or when manually editing data. This conveniently limits the edit form to only the fields that matter to that user when editing a location. Similarly, when importing and exporting, the columns are limited to those which matter to that group.

Address fields, like city, state and postal code, are normally shared across groups; however, that sharing must be explicitly set in the field settings for each field. Certain system fields like latitude and longitude, published state, name and type are accessible to all groups.
