
Repository Connector Options Reference (SVN, Git, Azure DevOps, etc.)

To create a project, Teamscale needs to be connected to your version control system. For this purpose, it offers several different repository connectors. This page describes the options for each of them.

Shared Connector Options

Setting up the other connectors to source code repositories and issue trackers is very similar to the Git connector setup. Many connectors share general options, such as a polling interval; therefore, we describe each shared option only for the first connector that uses it. Connectors without additional options are not documented separately.

Common and Git-specific Connector Options

Default branch name: The name of the branch to import.
Account: Account credentials that Teamscale will use to access the repository. Clicking the button opens a credentials creation dialog.
  • The Account Name setting in the dialog is Teamscale-internal (it does not need to match anything in Git).
  • When using a file URL (file:///) in the account, make sure that the corresponding repository is bare, i.e. has been cloned with the --bare option.
Path suffix: An optional subdirectory of the repository to which the analysis is restricted.
Repository identifier: Teamscale will use this identifier to distinguish different repositories in the same project. (Teamscale-internal only)
Included file names: One or more Ant patterns, separated by comma or newline, that restrict the analysis to a subset of files. A file is included in the analysis if any of the supplied patterns match. Typical examples:
  • include all Java and JS files: **.java, **.js
  • include all Java and JS files in a specific folder: */folder/*.java, */folder/*.js
Excluded file names: One or more Ant patterns, separated by comma or newline, that exclude files from the analysis. A file will be excluded if any of the exclude patterns match. Typical examples:
  • exclude a certain folder: */folder/*
  • exclude files that end with Test or test: */[Tt]est.cs
  • exclude a top-level folder: folder/**
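As a minimal sketch of how such Ant-style include/exclude patterns behave, the following translates them to regular expressions. This is a simplified illustration, not Teamscale's implementation; real Ant matching has additional rules (e.g. for trailing path separators).

```python
import re

def ant_to_regex(pattern: str) -> str:
    """Simplified translation of an Ant-style pattern to a regex.

    Handles '**' (any path segments), '*' (any characters within one
    path segment) and '?' (one character within a segment).
    """
    out, i = [], 0
    while i < len(pattern):
        if pattern.startswith("**", i):
            out.append(".*")
            i += 2
        elif pattern[i] == "*":
            out.append("[^/]*")
            i += 1
        elif pattern[i] == "?":
            out.append("[^/]")
            i += 1
        else:
            out.append(re.escape(pattern[i]))
            i += 1
    return "^" + "".join(out) + "$"

def matches(path: str, patterns: str) -> bool:
    # A file is included/excluded if any comma-separated pattern matches.
    return any(re.match(ant_to_regex(p.strip()), path)
               for p in patterns.split(","))

print(matches("src/app/Main.java", "**.java, **.js"))    # True
print(matches("src/app/Main.java", "*/folder/*.java"))   # False
print(matches("x/folder/Main.java", "*/folder/*.java"))  # True
```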
Include submodules: Whether to also analyze code in submodules.
Submodule recursion depth: The maximum depth of nested submodules that will be analyzed. (Has no effect if Include submodules is not selected.)
SSH Private Key ID: The ID of an SSH private key stored in Teamscale which will be used when connecting to a Git account over SSH.
You can manage all your Git private keys on the Settings page of the Admin perspective, in the Git section. When using a Git private key, the Account used by the connector has to be configured as follows: the URL has to contain the user name (e.g., ssh://username@gitserver/path/to/repo.git), the password entered must be the password for the private key, and the username field may be left empty.
Enable branch analysis: This checkbox activates our branch support feature. If you select it, Teamscale will analyze all branches of the repository. If branch analysis is enabled, the Default branch name option specifies the main branch (usually main or master).
Included branches: Explicitly control which branches will be analyzed. (Hidden if Enable branch analysis is not selected.)
Excluded branches: Explicitly state which branches to exclude. Note that Teamscale will create so-called anonymous branches for all historic branches whose name could not be reconstructed. By default, these are filtered using the exclude pattern _anon.*.
Start revision: The first revision that will be analyzed. You can enter a date (which selects the first revision after that date) or a revision hash.
Content exclude: A comma-separated list of regular expressions. Teamscale will exclude all files that match one of these filters.
Polling interval: How often Teamscale queries the repository for updates. (Unit is seconds)
Test-code path pattern: Ant patterns describing the file paths to be considered test code. Files that match one of the given patterns are considered test code unless they are also matched by a pattern in Test-code path exclude pattern. Patterns can be separated by comma or newline. Files matched as test code will be excluded from test gap tree maps.
Test-code path exclude pattern: Ant patterns describing the file paths that should be excluded from the test-code files matched by the option Test-code path pattern. Files that match one of the given patterns will not be considered test code. Patterns can be separated by comma or newline.
Branch transformation (expert option): Regex transformations that are applied to the branch names of the repository. The mapping should be one-to-one. This can be used to unify branch names (e.g., master vs. main) within a Teamscale project using multiple repository connectors. Example: 'master -> main'.
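The transformation syntax can be illustrated with a small Python sketch. The '->' parsing and the anchoring of the pattern shown here are assumptions for illustration, not Teamscale's exact implementation:

```python
import re

def transform_branch(branch: str, rule: str) -> str:
    # Split a 'pattern -> replacement' rule and apply it to a branch name.
    pattern, replacement = (part.strip() for part in rule.split("->"))
    # Convert $1-style group references to Python's \1 backreference syntax.
    replacement = re.sub(r"\$(\d+)", r"\\\1", replacement)
    return re.sub("^" + pattern + "$", replacement, branch)

print(transform_branch("master", "master -> main"))                # main
print(transform_branch("release/1.2", r"release/(.*) -> rel-$1"))  # rel-1.2
```

Branch names that do not match the pattern are left unchanged, which is why the mapping should be one-to-one.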

Team Foundation Server (TFS)-specific Connector Options

Path suffix: A common subdirectory on the Azure DevOps Server that is appended to the account URI. Leave empty if the whole directory structure is relevant. This is the root directory when looking for branches if branch analysis is enabled.
Branch path suffix: The sub-path (relative to the branches) that should be analyzed. Leave empty to analyze all code within each branch.
Branch lookup paths: A comma-separated list of paths in the repository where Teamscale will search for branches, within the root directory defined by the Path suffix and account URI. If set, Teamscale only looks for branches on the top level of each branch lookup path. If empty, only the root directory is searched for branches on the top level.
Included/Excluded branches: Included and excluded branches must contain the full path to the branch relative to the root directory, i.e., the Branch lookup path followed by the branch name.
Non-branched mode: Activates the non-branched mode for TFS. If it is enabled, Teamscale ignores the default branch and treats everything found in the root directory of the repository as content to analyze. If the option is disabled, Teamscale interprets the directories found at the Branch lookup paths as branches. It is not possible to enable both branch analysis and non-branched mode.

Subversion (SVN)-specific Connector Options

Branches directory: A path in which to look for branches. Defaults to branches.

File System-specific Connector Options

Input directory: The path of the source-code directory.

Vote-Supporting Connector Options

Teamscale can cast merge request votes on platforms that support it. The connectors that implement this behavior are:

  • Azure DevOps Git
  • Bitbucket Cloud
  • Bitbucket Server
  • Gerrit
  • GitHub
  • GitLab
  • SCM-Manager

The common options for these connectors are:

Enable Findings Integration: Prerequisite for all findings-related merge request actions. Opens up options like findings badges. This option does nothing on its own.
Enable Findings Badge: Includes the Teamscale findings badge in voting. This badge is usually added to the merge request's description and shows the findings churn of the merge request, i.e., findings that were added, removed, or are in changed code of this merge request.
Enable Voting for Findings: Whether Teamscale should vote on merge requests. Depending on the platform and, more specifically, the repository's configuration, this can block the merging of a merge request.
Enable Detailed Line Comments For Findings: When enabled, a Teamscale vote will carry a detailed comment for each generated finding, annotated to the relevant line in the reviewed file.
Enable Test Gap Integration for Merge Requests: Includes the Teamscale test gap badge in voting. This badge is usually added to the merge request's description.
Enable Commit Alerts for Merge Requests: When enabled, a Teamscale vote will carry a detailed comment for each commit alert, annotated to the relevant file.
Aggregate Findings in Single Comment: When enabled, Teamscale will aggregate all related findings in a commit into a single comment before annotating it to the reviewed file. The aggregated findings can be multiple identical findings or different findings sharing the same type (e.g., structure findings). This reduces redundancy and avoids overcrowding the merge request with repetitive comments. Example: if a commit introduced 20 "Interface comment missing" findings, instead of adding 20 individual comments to the merge request, Teamscale will only add a single comment "This file contains 20 instances of: Interface comment missing".
Ignore Yellow Findings For Votes (expert option): When enabled, only red findings will cause Teamscale to cast a negative vote on merge requests. Note that this option behaves differently for the Gerrit connector.
Ignore Yellow Findings For Comments (expert option): When enabled, Teamscale will only add comments for red findings to the merge request.
Partitions required for Voting (expert option): Comma-separated list of external upload partitions that are expected to be present after a code commit before Teamscale votes. This is used to prevent voting too early when further external uploads (e.g., test coverage reports) are expected.
Detailed Line Comments Limit: Maximum allowed number of detailed line comments when annotating merge requests, in case the Enable Detailed Line Comments For Findings option is enabled. If the number of added findings exceeds the limit, no line comments will be added to the merge request and a warning message is displayed. Please note that some platforms enforce limits in their API.
Vote Include/Exclude Patterns (expert option): These include/exclude filename patterns filter findings from both the findings badge (if the badge is enabled) and the inline comments (if they are enabled). Teamscale will still add a findings badge even if none of the changed files of a merge request match the filter.

Bitbucket Server

The Bitbucket Server connector can access Git repositories of Bitbucket Server instances and can additionally vote on pull requests.

TIP

For a visualization of how Teamscale votes in Bitbucket Server, see our how-to guide.

Bitbucket Server Options

Enable Voting for Findings: This option causes Teamscale to vote on commits as a build status, which can potentially block the merging of pull requests, depending on the repository's configuration.
Enable pull request review: Votes by submitting a pull request review.

Gerrit

The Gerrit connector can access Gerrit-managed Git repositories, and can additionally vote on changes.

Gerrit Connector Options

Project Name: The name of the Gerrit project to connect with.
Ignore Yellow Findings For Votes (expert option): When this option is enabled, the voting will be performed according to the following rules:
  • Vote +1 only if there are no findings.
  • Vote 0 if there are only yellow findings.
  • Vote -1 if there are any red findings.
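The rules above amount to a simple decision function. The sketch below is for illustration only; it is not Teamscale's implementation:

```python
def gerrit_vote(red_findings: int, yellow_findings: int) -> int:
    # Voting rules with 'Ignore Yellow Findings For Votes' enabled:
    if red_findings > 0:
        return -1  # any red finding yields a negative vote
    if yellow_findings > 0:
        return 0   # only yellow findings: neutral vote
    return 1       # no findings at all: positive vote

print(gerrit_vote(0, 0))  # 1
print(gerrit_vote(0, 3))  # 0
print(gerrit_vote(2, 3))  # -1
```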

Subversion (SVN) Branch Analysis

Subversion by itself does not support real branches, but rather uses its cheap copy operation and folder naming conventions for managing logical branches. For Teamscale to recognize a folder as a branch, it must either be a top-level folder named trunk or a folder below the branches directory configured for the repository (defaults to branches). Then enable branch support and configure the URL of the SVN connector to point to the project folder, i.e., the one containing trunk and the configured branches directory. If the SVN repository in question uses the standard SVN layout, you can simply use the branches directory option's default.

Note that Teamscale assumes all folders directly within the branches directory to be actual branches. Using further folder hierarchies to organize your branches is currently not supported. More specifically, a branch name may never contain a slash. Also, as Teamscale analyzes your history, even branches that have been deleted from the branches folder will be found and potentially be analyzed, as they are part of the recorded history. If you want to prevent this, either set a start date after the time the branch was deleted or exclude the corresponding branches using the branch exclude patterns.

Subversion Authentication Problems

If the SVN connector validates successfully, but the SVNChangeRetriever then fails with an SVNAuthenticationException (403 Forbidden), you can try adding the following to the JVM_EXTRA_ARGS entry in the file $TEAMSCALE_HOME/config/jvm.properties:

```properties
-Dcom.teamscale.enable-svn-thorough-auth-check-support=true
```

With this property enabled, Teamscale performs additional authentication checks to find the folder closest to the repository root that it can access. Because this is slower than the default behavior, it is disabled by default.

Azure DevOps Team Foundation Version Control Credentials (formerly called Team Foundation Server (TFS))

When connecting to Azure DevOps TFVC via a repository connector, you may use either the account credentials of a domain user or a personal access token (if these are enabled in your Azure DevOps).

When configuring an account credential for an access token, enter the Azure DevOps access token in the Access Token field. The Username is optional in this case. By convention, you may use the name of the access token.

Azure DevOps Boards (formerly called Team Foundation Server Work Items)

When connecting to an Azure DevOps Server via an issue connector, verify whether the Basic Authentication option is enabled in your IIS. Depending on this option, you may need different credentials than those used for your repository connector when connecting to Azure DevOps to retrieve work items.

With Basic Authentication enabled:

You must connect with the credentials (username and password) of an Azure DevOps user account. Connecting with personal access tokens will not work. This may either be a domain user or an Azure DevOps user with the necessary rights to read work items.

With Basic Authentication disabled:

You must connect with a personal access token. Connecting with a user account (username + password) will not work. Put the personal access token in the Password field in the Teamscale UI. Configuring a Username in Teamscale is optional in this case. By convention, you may use the name of the TFS access token.

Artifactory Connector Options

Artifactory is a tool hosting a repository where build artifacts can be stored. Examples of such artifacts are generated code or findings generated by some external tool. It is good practice to store these in a versioned manner with a reference to the revision used to generate the given artifacts. This can be easily achieved by encoding the revision in the path of the artifact, e.g., /app/component/trunk/rev1234/artifact.zip.

Teamscale can extract such metadata from the artifact path and use it to link revisions in Artifactory to revisions in source-code repositories.

Artifactory Connector Options

To extract metadata from paths, Teamscale offers a variety of extraction options that can be configured in the Artifactory Repository Connector. These options are defined by regular expressions with exactly one capturing group. The regular expression will be applied to the complete path of the imported artifact, and the content of the capturing group will be interpreted as the data to be extracted.

If the Artifactory file layout is not yet fixed, it may prove helpful to keep ease of extraction in mind when deciding on a structure. For example, paths should include the names of the fields Teamscale should extract. This allows simpler regular expressions than when having to derive the fields from the repository layout, especially if that layout is highly diverse across different projects.

Consequently, the path /app/component/branch_trunk/rev_1234/artifact.zip is easier to parse than the path /app/component/trunk/1234/artifact.zip. However, Teamscale is flexible enough to accommodate all repository structures, even when the layout is already established.
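For example, with the recommended layout the branch and revision can each be captured by a regular expression with a single capturing group. The patterns below are illustrative, not connector defaults:

```python
import re

path = "/app/component/branch_trunk/rev_1234/artifact.zip"

# Each extraction option is a regex with exactly one capturing group;
# the group's content is the extracted value.
branch = re.search(r"branch_([^/]+)/", path).group(1)
revision = re.search(r"/rev_(\d+)/", path).group(1)

print(branch, revision)  # trunk 1234
```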

Teamscale assumes that uploads to Artifactory are typically triggered by VCS commits. Some configuration options only make sense in this light, e.g., "Timestamp Interpretation". This option allows relating an Artifactory upload to some previous repository commit.

Artifactory Connector Configuration

Repository: The name of the Artifactory repository which contains the relevant data.
Name Search Pattern: The pattern (Artifactory Query Language) on the simple file name used for finding archive files. Supported wildcards are * and ?. Separate multiple patterns with comma or newline.
Example: *.zip
Path Search Pattern: The pattern (Artifactory Query Language) on the full path used for finding archive files. Supported wildcards are * and ?. Separate multiple patterns with comma or newline.
Example: component/subcomponent/*
Branch Extraction Pattern: Pattern used to extract the branch name from the full path. A regex pattern with exactly one capturing group. The group is used to find the branch name in the archive's path.
Example: component/(trunk|branch)/rev
Timestamp Extraction Pattern: Pattern used to extract the timestamp or revision from the full path. A regex pattern with exactly one capturing group. The group is used to find the timestamp in the archive's path. If this is empty, the creation date from Artifactory will be used.
Example: /rev(\d+)/
Timestamp Interpretation: Describes how to interpret the value from timestamp extraction. Possible values are:
  • 'date:pattern', where pattern is a Java time format descriptor
  • 'timestamp:seconds' and 'timestamp:millis' for Unix timestamps in seconds or milliseconds
  • 'connector:repository-identifier' to interpret the value as a revision of a connector in the same project
  • 'svn:account-name' to interpret the value as a Subversion revision
  • 'git:account-name' to interpret the value as a Git commit hash for a repository using the 'File System' connector
Prefix Extraction Pattern: Pattern used to extract an additional prefix that is placed before the name of files extracted from the ZIP files. A regex pattern with exactly one capturing group. The group is used to find the prefix in the archive's path. If this is empty, no prefix will be used.
Example: variant/(var1|var2)/component will prepend either var1 or var2 to the path used in Teamscale.
Ignore extraction failures (expert option): When checked, Teamscale will try to ignore errors during timestamp extraction according to the given extraction pattern. This can be useful when many non-conforming paths are expected to be analyzed alongside the conforming paths. When this is unchecked, such errors will stop the import until they are corrected (to avoid importing inconsistent data). When this option is checked, the non-conforming paths will be skipped, but all conforming paths will still be imported.
Delete Partitions Without New Uploads (expert option): Whether to discard data in partitions that have not been updated in an upload to Artifactory, instead of keeping it (off = keep data). Turning this on is useful in scenarios where you expect to get all relevant data with every single commit (e.g., when using only fully automated tests that are run with each commit). Turning it off is useful in the inverse scenario, e.g., when a lot of irregular manual uploads occur, such as when doing manual testing for test gap analysis.
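The non-VCS interpretation modes can be sketched as follows. This is a simplified illustration: the 'connector:', 'svn:' and 'git:' modes require repository access and are omitted, and the 'date:' mode here assumes a strptime-style pattern rather than a Java time format descriptor:

```python
from datetime import datetime, timezone

def interpret_timestamp(value: str, interpretation: str) -> datetime:
    kind, _, arg = interpretation.partition(":")
    if kind == "timestamp":
        # 'timestamp:seconds' or 'timestamp:millis' for Unix timestamps
        divisor = 1000 if arg == "millis" else 1
        return datetime.fromtimestamp(int(value) / divisor, tz=timezone.utc)
    if kind == "date":
        # 'date:pattern' -- arg is assumed to be a strptime pattern here
        return datetime.strptime(value, arg)
    raise ValueError(f"unsupported interpretation: {interpretation}")

print(interpret_timestamp("1700000000", "timestamp:seconds"))
print(interpret_timestamp("2024-01-15", "date:%Y-%m-%d"))
```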

S3 Connector

The S3 connector enables Teamscale to connect to Amazon S3 or any object storage server with an API compatible with Amazon S3 cloud storage (e.g., MinIO). The connector then fetches the data from the object storage server, whether it is generated code or data generated by some external tool (e.g., findings or code coverage). It is recommended to store the objects in archives whose keys include references to the revision used to generate the given artifacts, as well as the branch to which they belong, e.g., app/component/branch_name/rev_1234/artifact.zip.

Teamscale can extract such metadata from the object key and use it to link revisions from the object storage server to revisions in source-code repositories.

S3 Connector Account Credentials

When connecting to Amazon S3 or any S3-compatible server via the S3 connector, you must use the account credentials of a user's service account.

Required User Permissions

The user whose service account will be used must have the following action permissions:

  • s3:GetObject
  • s3:ListBucket

Below is a sample IAM policy created specifically for the Teamscale technical user, with only the required permissions.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::*"
            ]
        }
    ]
}
```
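If you prefer not to grant access to all buckets, the Resource entries can be narrowed to the relevant bucket. The bucket name teamscale-uploads below is a placeholder; note that s3:ListBucket applies to the bucket ARN, while s3:GetObject applies to the object ARNs:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::teamscale-uploads"]
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::teamscale-uploads/*"]
        }
    ]
}
```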

Account Credentials

When using http(s)-based path-style links, the used account's fields should be filled out as follows:

  • URI: The base URL of the object storage server (e.g., http(s)://<hostname>[:port]/<bucket>[/key]).
  • Username: The service account's access key.
  • Password: The service account's secret key.

When using http(s)-based virtual-host-style links, the used account's fields should be filled out as follows:

  • URI: The base URL of the object storage server (e.g., http(s)://<bucket>.<hostname>[:port][/key]).
  • Username: The service account's access key.
  • Password: The service account's secret key.

When using s3-style links, the used account's fields should be filled out as follows:

  • URI: The base URL of the object storage server (e.g., s3://<bucket>[/key]).

Connector bucket > URI bucket

The <bucket> given by a URI in s3-style will be overwritten by the bucket name given in the S3 Connector Configuration.

The access and secret keys are taken from the host machine or Teamscale instance after successful installation and configuration of the AWS CLI. The Username and Password will be ignored and can therefore be left empty.

Bucket name and host name

Please keep in mind that the bucket name should not be identical to the beginning of the host name. If so, a path-style URI will be misinterpreted as a virtual-host style URI. This will lead to a wrong bucket/key.

In that case, the parser will misread the host, bucket, and key.

Credentials Process

An AWS credentials process may be used instead of the account's username and password. To enable this functionality, you have to set the command via the JVM property -Dcom.teamscale.s3.credentials-process-command. The command should correspond to an executable file accessible by Teamscale that will provide the credentials in the correct format when executed. You may optionally specify multiple environment variables in the admin settings under the "AWS Integration" section. They will be added to the environment of the command. For example, if the command is credentials_process, it will be executed as VAR1=VAL1 VAR2=VAL2 ... credentials_process, where VAR1, VAR2, etc., are the environment variables and their corresponding values. Finally, the option "Use Credentials Process" must be enabled in the connector for the credentials process to be used.
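As an illustration, a credentials process can be any executable that prints the JSON contract AWS tooling expects from a credential_process command. The script and environment variable names below are hypothetical placeholders:

```python
#!/usr/bin/env python3
"""Hypothetical credentials process: prints AWS-style process credentials."""
import json
import os

credentials = {
    "Version": 1,  # fixed version of the credential_process JSON contract
    "AccessKeyId": os.environ.get("MY_ACCESS_KEY", "<access-key>"),
    "SecretAccessKey": os.environ.get("MY_SECRET_KEY", "<secret-key>"),
}
print(json.dumps(credentials))
```

Any environment variables configured in the "AWS Integration" admin settings (e.g., MY_ACCESS_KEY above) would be available to this script when Teamscale invokes it.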

S3 Connector Options

To extract metadata from objects, Teamscale offers a variety of extraction options that can be configured in the S3 Repository Connector. These options are defined by regular expressions with exactly one capturing group and/or Ant patterns. The regular expression will be applied to the complete key of the imported object, and the content of the capturing group will be interpreted as the data to be extracted.

If the bucket's layout is not yet fixed, it may prove helpful to keep ease of extraction in mind when deciding on a structure. For example, object keys should include the names of the fields Teamscale should extract. This allows simpler regular expressions than when having to derive the fields from the bucket's layout, especially if that layout is highly diverse across different projects.

Consequently, the path /app/component/branch_trunk/rev_1234/artifact.zip is easier to parse than the path /app/component/trunk/1234/artifact.zip. However, Teamscale is flexible enough to accommodate all bucket structures, even when the layout is already established.

Teamscale assumes that uploads to the S3 object storage server are typically triggered by VCS commits. Some configuration options only make sense in this light, e.g., "Timestamp Interpretation". This option allows relating an S3 object to some previous repository commit.

S3 Connector Configuration

Bucket: The name of the bucket which contains the relevant objects and data. It also replaces the bucket name in the s3-style URI given in the account.
Key Prefixes: Strings indicating the prefixes to be used when fetching keys from the bucket. Separate multiple strings with a comma.
Included Key Patterns: Ant patterns describing the keys to be used for finding archive files in the bucket. Supported wildcards are * and ?. Separate multiple patterns with comma or newline.
Example: *.zip
Excluded Key Patterns: Ant patterns describing the keys to be excluded from the bucket. Keys will be matched case-insensitively.
Use Credentials Process: If enabled, the configured credentials process will be used for authentication instead of the account's username and password fields.
Branch Extraction Pattern: Pattern used to extract the branch name from the full object key. A regex pattern with exactly one capturing group. The group is used to find the branch name in the archive's key.
Example: component/(trunk|branch)/rev
Timestamp Extraction Pattern: Pattern used to extract the timestamp or revision from the full object key. A regex pattern with exactly one capturing group. The group is used to find the timestamp in the archive's key. If this is empty, the creation date from the object storage server will be used.
Example: /rev(\d+)/
Timestamp Interpretation: Describes how to interpret the value from timestamp extraction. Possible values are:
  • 'date:pattern', where pattern is a Java time format descriptor
  • 'timestamp:seconds' and 'timestamp:millis' for Unix timestamps in seconds or milliseconds
  • 'connector:repository-identifier' to interpret the value as a revision of a connector in the same project
  • 'svn:account-name' to interpret the value as a Subversion revision
  • 'git:account-name' to interpret the value as a Git commit hash for a repository using the 'File System' connector
Prefix Extraction Pattern: Pattern used to extract an additional prefix that is placed before the name of files extracted from the ZIP files. A regex pattern with exactly one capturing group. The group is used to find the prefix in the archive's key. If this is empty, no prefix will be used.
Example: variant/(var1|var2)/component will prepend either var1 or var2 to the path used in Teamscale.
Ignore extraction failures (expert option): When checked, Teamscale will try to ignore errors during timestamp extraction according to the given extraction pattern. This can be useful when many non-conforming keys are expected to be analyzed alongside the conforming keys. When this is unchecked, such errors will stop the import until they are corrected (to avoid importing inconsistent data). When this option is checked, the non-conforming keys will be skipped, but all conforming keys will still be imported.
Delete Partitions Without New Uploads (expert option): Whether to discard data in partitions that have not been updated in an upload to the object storage server, instead of keeping it (off = keep data). Turning this on is useful in scenarios where you expect to get all relevant data with every single commit (e.g., when using only fully automated tests that are run with each commit). Turning it off is useful in the inverse scenario, e.g., when a lot of irregular manual uploads occur, such as when doing manual testing for test gap analysis.

Issue Tracker Connector Options

Options for all Issue Tracker Connectors

Issue Connector identifier: Teamscale will use this identifier to distinguish different issue connectors in the same project. (Teamscale-internal only)
Issue ID pattern in commit messages: A regular expression that matches the issue ID inside commit messages. Must contain at least one capturing group. The first capturing group of the regular expression must match the entire issue ID as it is used by the issue tracker. For example, for Jira a valid pattern might be (JIRA-\d+). If the regular expression can match alternate spellings that the issue tracker does not consider valid issue IDs, these spellings must be normalized using an issue ID transformation (see below).
Issue ID pattern in branch names: A regular expression that matches the issue ID inside branch names. Must contain at least one capturing group. The first capturing group of the regular expression must match the entire issue ID as it is used by the issue tracker. For example, for Jira a valid pattern might be (jira_\d+). If the regular expression can match alternate spellings that the issue tracker does not consider valid issue IDs, these spellings must be normalized using an issue ID transformation (see below).
Projects: A comma-separated list of project names from the issue tracker. Only issues from these projects will be imported.
Issue ID transformation in commit messages (expert option): A transformation expression in the form original expression -> transformed expression which translates the issue references in commit messages to actual issue IDs in the issue tracker. For example, JIRA-(\d+) -> MY_PROJECT_KEY-$1 (using the Issue ID pattern in commit messages (JIRA-\d+)).
Important: original expression is applied to matches of the capturing groups in the Issue ID pattern in commit messages option (not the entire commit message).
In transformed expression, capturing groups from original expression can be referenced by $1, $2, ...
Issue ID transformation in branch names (expert option): A transformation expression in the form original expression -> transformed expression which translates the issue references in branch names to actual issue IDs in the issue tracker. For example, jira_(\d+) -> MY_PROJECT_KEY-$1 (using the Issue ID pattern in branch names (jira_\d+)).
Important: original expression is applied to matches of the capturing groups in the Issue ID pattern in branch names option (not the entire branch name).
In transformed expression, capturing groups from original expression can be referenced by $1, $2, ...
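Pattern and transformation work together roughly like this. The sketch below uses the example pattern and a hypothetical project key from above; it is not Teamscale's implementation:

```python
import re

commit_message = "JIRA-123: fix crash on startup"

# 1. The issue ID pattern extracts the raw reference from the commit message.
issue_ref = re.search(r"(JIRA-\d+)", commit_message).group(1)

# 2. The transformation 'JIRA-(\d+) -> MY_PROJECT_KEY-$1' normalizes it
#    to the ID the issue tracker actually uses ($1 becomes \1 in Python).
issue_id = re.sub(r"JIRA-(\d+)", r"MY_PROJECT_KEY-\1", issue_ref)

print(issue_id)  # MY_PROJECT_KEY-123
```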

Jira-specific Issue Tracker Connector Options

Add to Jira issues: Allows configuring active (push) updates of Jira issues with analysis results from Teamscale. The Jira issue update configuration menu is shown below. An example of a Jira issue field enhanced with Teamscale-generated data can be seen in this guide.
Remove deleted issues: Whether issues deleted in Jira should also be deleted in Teamscale. If enabled, the first poll per day will also check for issues deleted in Jira and remove them from Teamscale. In order to detect deleted issues, every issue has to be fetched from the Jira server. This can be a rather expensive operation, even when no issues are deleted.

Jira Server Issue Update Configuration Menu

TFS Work Item-specific Issue Tracker Connector Options

Areas: A list of full area paths to extract work items from.
Include Sub-Areas: If checked, issues from sub-areas are retrieved as well; otherwise, only issues exactly matching one of the area paths are retrieved.

RTC/Jazz-specific Issue Tracker Connector Options

Field filter expression: An expression used to filter issues that should be imported. Must conform to RTC/Jazz's Reportable REST API specification.
Item types: Allows selecting the work item types to import from RTC/Jazz. Leave empty to import all item types.

Jira Server Issue Update Option Details

The Add to Jira issues option (expert option) allows configuring the type of Teamscale analysis results appended to Jira issues, as well as the destination field(s) of Teamscale-generated data in matched issues. The following analysis results can be appended to matched Jira issues:

  • The Findings balance suboption updates Jira issue(s) with findings badges that contain the counts of added/removed/changed code findings introduced by the respective issue(s).

  • The Test gap suboption updates Jira issue(s) with the test gap ratio of the respective issue(s). The test gap ratio also includes changes from child issues in case the Jira issue acts as a parent for other issues.

Both suboptions are linked to the Issue perspective where all issue-related information can be viewed.

Additionally, it is possible to configure the field(s) of Jira issues that Teamscale-generated data should be appended to:

  • The Issue description suboption appends Teamscale-generated data to the issue description.

  • The Custom field(s) suboption writes Teamscale-generated data to the specified custom fields. This will overwrite any existing data in the field. The custom fields have to be created by a Jira administrator and must be of type Text Field (multi-line).

Requirements Management Tool Connector Options

Options for all Requirements Management Tool Connectors

Projects / Project ID: Name of the project(s) in the requirements management tool to import spec items from.
Requirements Connector identifier: Teamscale will use this identifier to distinguish different requirements management tool connectors in the same project. (Teamscale-internal only)
Specification item ID pattern: A regular expression that matches the spec item ID inside source code entity comments. Must contain at least one capturing group. The first capturing group of the regular expression must match the entire spec item ID as it is used by the requirements management tool. For example, for Polarion a valid pattern might be (DP-\d+).
Custom fields: Allows specifying the custom fields to import with the spec items. Custom fields have to be selected separately, as they are non-standard fields that are configured for each project individually.

Polarion-specific Requirements Management Tool Connector Options

Specification space name: Allows selecting the space to import the documents from.
Document (module) ID(s): Allows selecting the documents to import. The documents can be specified as a comma-separated list of document IDs and/or document ID regular expressions.
Included work item types: Allows selecting the work item types to import from Polarion (requirements, test cases, etc.) and their respective type abbreviations for use in the Teamscale UI.
Included work item link roles: Allows selecting the work item links to import (duplicates, is implemented by, etc.).

RTC/Jazz-specific Requirements Management Tool Connector Options

Field filter expression: An expression used to filter work items that should be imported. Must conform to RTC/Jazz's Reportable REST API specification.
Item types: Allows selecting the work item types to import from RTC/Jazz and their respective type abbreviations for use in the Teamscale UI.