How to Upload External Analysis Results to Teamscale
It is possible to upload external analysis results to Teamscale, either via the REST API or via repository connectors. External analysis results include test coverage reports, test execution results, custom findings, custom metrics and more.
Upload via Web UI
For testing and debugging purposes it can sometimes be helpful to manually upload reports via the Web UI.
Upload via Command Line (recommended)
With our command-line utility teamscale-upload for Windows and Linux, you can easily upload external analysis reports to Teamscale.
If you want to upload a report file from your CI pipeline, you can run:
teamscale-upload --server http://<TEAMSCALE_URL> --project <PROJECT> --user <USER> --accesskey <ACCESSKEY> --partition <PARTITION> --format jacoco **/target/site/jacoco/**.xml
Every upload of external results needs to be linked to a commit in order to understand which exact versions were used during testing. This command will automatically detect the version control commit to which the report belongs by querying your CI system (if it is supported, e.g. Jenkins, Azure DevOps, ...) or any Git or SVN checkout in the current working directory. If you want to specify the commit manually, see below.
Choosing the correct format (--format) is crucial. Otherwise, Teamscale cannot interpret the report file and will log errors to the Worker Log. The format must be one of the formats supported by Teamscale. Casing does not matter.
teamscale-upload will exit with code 0 if the upload was fully successful. Any other exit code indicates a problem that you must address; the problem is logged to the console.
Run teamscale-upload --help to see all available options.
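For CI scripts, the exit code can be used to fail the build when an upload does not succeed. Below is a minimal sketch, assuming the access key is provided via the TEAMSCALE_ACCESS_KEY environment variable (described below) and that server, project, partition, and report paths are placeholders for your setup:

#!/usr/bin/env bash
# Abort the CI step as soon as any command fails, e.g. a failed upload
# (teamscale-upload exits with a non-zero code on any problem).
set -euo pipefail

teamscale-upload \
  --server https://teamscale.example.com \
  --project my-project \
  --user build \
  --partition "Unit Tests" \
  --format JACOCO \
  'target/site/jacoco/**.xml'
echo "Upload succeeded."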
Necessary Permissions in Teamscale
The user you supply on the command line must have the Perform External Uploads permission for the project to which you are uploading the reports. For a production setup we recommend that you create a technical user in Teamscale and assign it the Build role for that project.
Self-Signed Certificates
If you're accessing Teamscale via HTTPS and are signing the certificates yourself or via an internal CA, you can simplify the initial setup of this tool by using --insecure. This will disable certificate validation.
Once you're successfully uploading your analysis results, you can re-enable validation and provide the necessary certificates as a Java key store via --trusted-keystore.
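For example, you might first verify that uploads work at all and then switch to proper validation. A sketch (server, project, and file names are placeholders; check teamscale-upload --help for the exact --trusted-keystore argument syntax):

# Initial setup: disable certificate validation to rule out TLS problems
teamscale-upload --server https://teamscale.example.com --project my-project --user build --accesskey - --partition "Unit Tests" --format JACOCO --insecure 'reports/**.xml' < accesskey.txt

# Production setup: validate the self-signed certificate via a Java key store
teamscale-upload --server https://teamscale.example.com --project my-project --user build --accesskey - --partition "Unit Tests" --format JACOCO --trusted-keystore keystore.jks 'reports/**.xml' < accesskey.txt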
Specifying the Teamscale access key
teamscale-upload offers two ways to securely specify the Teamscale access key:
- Via the environment variable TEAMSCALE_ACCESS_KEY. Example:
$ export TEAMSCALE_ACCESS_KEY=<key>
$ teamscale-upload --server http://<TEAMSCALE_URL> --project <PROJECT> --user <USER> --partition <PARTITION> --format <FORMAT> <PATH/TO/REPORT/FILE>
- Via standard input, by specifying --accesskey - and then entering the key via stdin. Example:
$ cat ~/accesskey.txt | teamscale-upload --server http://<TEAMSCALE_URL> --project <PROJECT> --user <USER> --accesskey - --partition <PARTITION> --format <FORMAT> <PATH/TO/REPORT/FILE>
Additional way to specify the Teamscale access key (not recommended)
There is one additional, unrecommended way to specify the Teamscale access key: entering it directly via the command-line option --accesskey <key>. Since this exposes the key in plain text (e.g., in your shell history and in the process list), it should only be used when the above options are not applicable.
$ teamscale-upload --server http://<TEAMSCALE_URL> --project <PROJECT> --user <USER> --accesskey <key> --partition <PARTITION> --format <FORMAT> <PATH/TO/REPORT/FILE>
Manually specifying the commit to upload to
If you want to specify a commit manually, you can do so with the --commit parameter. It accepts the following formats:
- The identifier of the commit in the supported VCS:
  - SHA hash of the commit, e.g. 0842320ef195b2d9190ab32b2d24b4a77eb522b1
  - SVN revision number, e.g. 1352456
  - Team Foundation changeset ID, e.g. 1567
It is also possible to use --branch-and-timestamp. For details on this format, please refer to the glossary.
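For example, in a CI pipeline where no Git checkout is available in the working directory, the commit could be passed explicitly. A sketch (the CI_COMMIT_SHA variable is a placeholder for whatever your CI system provides):

# Upload to the exact Git commit that was built
teamscale-upload --server https://teamscale.example.com --project my-project --user build --accesskey - --partition "Unit Tests" --format JACOCO --commit "$CI_COMMIT_SHA" 'reports/**.xml' < accesskey.txt

# Alternatively, address a branch and a Unix timestamp in milliseconds
teamscale-upload --server https://teamscale.example.com --project my-project --user build --accesskey - --partition "Unit Tests" --format JACOCO --branch-and-timestamp "main:1578650400000" 'reports/**.xml' < accesskey.txt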
Uploading Multiple Report Files of the Same Format
If you need to upload more than one file of the same format, you can either specify more than one path on the command line or use wildcards:
- * matches any number of characters in the current directory or file name
- ** matches any number of nested directories
- ? matches any single character
Always Use Single Quotes Around Wildcard Strings
Since * and ? are characters that are also interpreted by your shell, you must always use single quotes around paths with wildcards.
E.g. 'path/**/report*.xml' matches all of the following:
path/to/nested/report.xml
path/report.xml
path/report123.xml
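Putting this together, an upload of several JaCoCo reports from multiple modules might look like this (module names and paths are illustrative):

teamscale-upload --server https://teamscale.example.com --project my-project --user build --accesskey - --partition "Unit Tests" --format JACOCO 'module-a/**/jacoco*.xml' 'module-b/**/jacoco*.xml' < accesskey.txt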
Uploading Multiple Files with Different Formats
If you need to upload more than one type of report, e.g. test coverage results and findings, you can use an input file. In this file, you can define multiple formats and one or more patterns per format, e.g.:
[jacoco]
jacoco1/**.xml
jacoco2/**.xml
[findbugs]
pattern1/**.findbugs.xml
pattern2/**.findbugs.xml
Use this file with the --input command-line option:
teamscale-upload --server http://<TEAMSCALE_URL> --project <PROJECT> --user <USER> --accesskey <ACCESSKEY> --partition <PARTITION> --input <PATH/TO/INPUT/FILE>
Upload via GitHub Action
If you are using GitHub as your CI system we recommend using teamscale-upload via GitHub Action which is available on the GitHub Marketplace. It provides the same options as teamscale-upload and can be used as follows:
- uses: 'cqse/teamscale-upload-action@v2.4.0'
  with:
    server: 'https://demo.teamscale.com'
    project: 'teamscale-upload'
    user: 'build'
    partition: 'Github Action > Linux Branch And Timestamp'
    accesskey: ${{ secrets.ACCESS_KEY }}
    format: 'SIMPLE'
    message: 'This is a test message.'
    files: 'test_resources/coverage.simple test_resources/coverage2.simple'
A more detailed description of the parameters can be found here.
Upload via REST API
If you cannot use the teamscale-upload utility, you can also manually upload files via our REST API. We do recommend that you use teamscale-upload if possible, since it includes helpful handling for many common errors, typos, etc.
The REST service external-analysis/session/<SESSION_ID>/report allows uploading results from analysis tools like FindBugs/SpotBugs or from different coverage tools.
Prerequisites
In order to upload a report to Teamscale, the server which is running the Teamscale instance must be accessible via HTTP(S).
As a first step, create a new user which will be able to upload external analysis results to Teamscale. You can either use your administration account for this or, better, use a dedicated user with the Build role.
Uploading a single report
To upload a single external analysis report to Teamscale for the current HEAD commit (see below for uploads for historic dates), issue the following HTTP request:
HTTP method: POST
Request URL:
http://<YOUR_TEAMSCALE_SERVER>:<PORT>/api/projects/<TEAMSCALE_PROJECT_ID>/external-analysis/session/auto-create/report?format=<FORMAT>&partition=<PARTITION>&message=<MESSAGE>
- <YOUR_TEAMSCALE_SERVER>:<PORT> is the address and port of the Teamscale instance.
- <TEAMSCALE_PROJECT_ID> is the ID of the Teamscale project into which to store the coverage.
- <FORMAT> is the report format of the uploaded file. It must be one of the formats supported by Teamscale.
- <PARTITION> is a logical name for the type of coverage. All coverage with the same partition is grouped together. This, e.g., allows you to upload coverage for both your Java and your JavaScript tests without the results of one overriding those of the other. The name is purely descriptive and can be any arbitrary string.
- <MESSAGE> is the message shown to the user in the Activity perspective. It should make clear that coverage was uploaded.
Body: Form data. May contain any number of elements named "report", each containing the content of a single result file.
Authentication: HTTP Basic Authentication with the build or admin user and their Access Token (not their password!)
Sample Upload with Curl
curl --request POST --user bob:e434mikm35d --form "report=@coverage.xml" "http://teamscale-server:8080/api/projects/myProject/external-analysis/session/auto-create/report?format=JACOCO&partition=Unit%20Tests&message=Unit%20Test%20Coverage"
Always Quote the URL
Please note that the URL must be enclosed in quotes. This is required because most shells treat the & character specially and would otherwise truncate the URL. If you are not sure you quoted the URL correctly, you can use curl -v to get verbose output from curl, including the actual URL it tries to contact.
File encoding
If not explicitly specified, Teamscale will interpret the encoding of the provided file based on its BOM, or default to UTF-8. Change the charset of the Content-Type header when uploading a report via curl to prevent parsing errors caused by a misinterpreted encoding.
An example of how to upload a report with an explicit UTF-16 encoding:
curl --request POST --user bob:e434mikm35d --form "report=@coverage.xml;type=text/xml;charset=UTF-16" "http://teamscale-server:8080/api/projects/myProject/external-analysis/session/auto-create/report?format=JACOCO&partition=Unit%20Tests&message=Unit%20Test%20Coverage"
Note the type definition type=text/xml;charset=UTF-16 after the file name.
Request Parameters
Details about the service's parameters can be best retrieved via the online service documentation as described here.
In addition to the common pitfalls when dealing with the REST API, you need to check if the external tool is activated in the project's analysis profile.
Furthermore, please refer to the details for supported upload formats.
Uploading data for historic dates or revisions
Note that the uploaded results will be used for the current HEAD commit in Teamscale on the main branch (e.g., latest commit on master for Git, latest changeset on the TFS main branch or latest commit on trunk for SVN). Should this not be the behaviour you want, you can override this by adding the following to the request's URL:
&revision=<REVISION>
where REVISION is, depending on your version control system, a:
- full Git SHA hash, or
- SVN revision number, or
- TFS changeset ID
Use only during session creation
Note that the revision parameter has to be supplied when creating the session. It will not work for individual uploads.
Please refer to the interactive REST API documentation to view all parameters.
Teamscale stores the external analysis results for the given revision regardless of whether that revision has already been analyzed. If it has not, the upload is integrated into Teamscale once the revision is analyzed.
If the project has a multi-connector setup, the VCS repository that contains the revision the upload is meant for can be specified in the request's URL, using the repository identifier (see connector options) of the project connector:
&repository=<REPOSITORY_IDENTIFIER>
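For example, a session pinned to a specific Git revision in a multi-connector project could be created like this (server, credentials, project, revision, and repository identifier are placeholders; note that both parameters belong on the session-creation request, not on the report uploads):

curl --request POST --user bob:e434mikm35d "http://teamscale-server:8080/api/projects/myProject/external-analysis/session?message=Nightly%20Coverage&partition=Unit%20Tests&revision=0842320ef195b2d9190ab32b2d24b4a77eb522b1&repository=my-repo"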
Teamscale also supports uploading external analysis results to timestamps. This can be achieved by adding the following to the request's URL:
&t=<TIMESTAMP>
If you additionally need to specify the branch for the upload, you can use the t parameter as well:
&t=<BRANCH>:<TIMESTAMP>
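For example, an upload attached to the main branch at a specific point in time (timestamp in milliseconds since the epoch; all values are placeholders) could look like this:

curl --request POST --user bob:e434mikm35d --form "report=@coverage.xml" "http://teamscale-server:8080/api/projects/myProject/external-analysis/session/auto-create/report?format=JACOCO&partition=Unit%20Tests&message=Historic%20Upload&t=main:1578650400000"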
Uploading Multiple Reports At Once (Sessions)
If you want to upload multiple reports in a single logical commit, you should use Teamscale's session concept. For session handling, the service external-analysis/session is used. The following example uploads two different reports as a single commit using curl. For more details, see the service documentation.
curl --request POST --user bob:e434mikm35d "http://localhost:8080/api/projects/my-project-id/external-analysis/session?message=myMessage&partition=myPartition"
This command will return the session ID, in this case Sdfb38cb0-c348-40fb-82f0-144bf0e41beb. Using this session, we can upload multiple reports:
curl --request POST --user bob:e434mikm35d --form "report=@./MyReport.xml" "http://localhost:8080/api/projects/my-project-id/external-analysis/session/Sdfb38cb0-c348-40fb-82f0-144bf0e41beb/report?format=FINDBUGS"
curl --request POST --user bob:e434mikm35d --form "report=@./OtherReport.xml" "http://localhost:8080/api/projects/my-project-id/external-analysis/session/Sdfb38cb0-c348-40fb-82f0-144bf0e41beb/report?format=FXCOP"
Finally, we commit the session using a POST request:
curl --request POST --user bob:e434mikm35d "http://localhost:8080/api/projects/my-project-id/external-analysis/session/Sdfb38cb0-c348-40fb-82f0-144bf0e41beb"
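A scripted version of this flow could capture the session ID from the creation response; this sketch assumes the service returns the plain session ID in the response body:

# Create the session and remember its ID
SESSION_ID=$(curl --silent --request POST --user bob:e434mikm35d "http://localhost:8080/api/projects/my-project-id/external-analysis/session?message=myMessage&partition=myPartition")

# Upload any number of reports into the session
curl --request POST --user bob:e434mikm35d --form "report=@./MyReport.xml" "http://localhost:8080/api/projects/my-project-id/external-analysis/session/$SESSION_ID/report?format=FINDBUGS"

# Commit the session so all uploads appear as one logical commit
curl --request POST --user bob:e434mikm35d "http://localhost:8080/api/projects/my-project-id/external-analysis/session/$SESSION_ID"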
Upload via Gradle Plugin
With our Teamscale Gradle Plugin you can easily upload artifacts directly from Gradle without any additional tooling involved. It can automate uploading JUnit, JaCoCo and Testwise Coverage reports. This guide expects that you have already configured your test tasks and the jacoco plugin.
Test Impact Analysis
For more in depth instructions on how to set up Testwise Coverage collection and Impacted Test execution please refer to our tutorial.
You can apply the plugin by adding it to your plugins block of the top level project.
plugins {
id "com.teamscale" version "27.0.1"
}
The teamscale extension allows you to configure where reports should be uploaded to:
teamscale {
server {
url = 'https://teamscale.mycompany.com:8080'
project = 'test'
userName = 'build'
userAccessToken = property('teamscale.access-token')
}
}
The example assumes that you pass the Teamscale Access Token to Gradle via
gradle -Pteamscale.access-token=82l1jtkIx6xG7DDG34FLsKhejcHz1cMu ...
By default the plugin will try to detect the correct commit it needs to upload to from the Git repository. In case the Git repository is not present when Gradle is executed, which is the case in some CI setups, you need to manually specify the commit hash:
teamscale {
commit {
revision = System.getenv('CI_COMMIT_SHA')
}
}
You can specify which artifacts you want to upload, and to which partition, via the report block:
teamscale {
report {
jacoco()
junit()
testwiseCoverage()
}
}
These calls will configure all existing JacocoReport, Test and TestImpacted tasks, respectively, to produce the necessary reports and pick them up after the tasks have executed.
All aspects of the upload can be configured individually:
teamscale {
report {
jacoco {
// The partition all JaCoCo reports should be uploaded to
// This is the only required setting
partition = "Unit Tests"
}
junit {
partition = "Automated Tests"
// This is the message that will be shown in Teamscale for the artificial coverage commit (optional)
message = "JUNIT gradle upload" // This is the default value
// Whether reports of this type should be uploaded (Optional, default is true)
upload = true
}
}
}
tasks.named("test") {
// ...
teamscale {
report {
// partition, message and upload can also be overwritten for individual tasks
upload = false
}
}
}
The actual upload is performed by the teamscaleReportUpload task of the top-level project. It is not executed automatically, so you need to call it explicitly, either as part of your CI Gradle command
gradle --continue test jacocoTestReport teamscaleReportUpload
or by modeling the desired behavior in Gradle, e.g.:
tasks.named("jacocoTestReport") {
// Report should always be uploaded after it was generated
finalizedBy teamscaleReportUpload
}
tasks.named("teamscaleReportUpload") {
// Report generation is required to run before uploading the report
dependsOn jacocoTestReport
}
Upload via Maven Plugin
Analogously to the Gradle plugin, there is a Teamscale Maven plugin that can be used to upload JaCoCo reports to Teamscale. It can be used to automatically upload Unit, Integration and Aggregate test coverage reports generated by the JaCoCo Maven plugin to your Teamscale instance. This guide expects that you have already configured the JaCoCo plugin.
Recording testwise coverage
The Teamscale Maven plugin can also be used to directly record and upload test coverage to Teamscale. In addition, it is possible to record testwise coverage with it. Please have a look at our testwise coverage section for a more detailed example.
The plugin offers a goal called upload-coverage, which can be invoked after the JaCoCo test reports have been generated.
Check the JaCoCo configuration
Make sure that JaCoCo is configured to generate XML reports.
Configure where reports should be uploaded to by supplying the <teamscaleUrl> and <projectId>. An example configuration could look like this:
<plugin>
  <groupId>com.teamscale</groupId>
  <artifactId>teamscale-maven-plugin</artifactId>
  <version>32.4.3</version>
  <executions>
    <execution>
      <goals>
        <goal>upload-coverage</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <teamscaleUrl>http://localhost:8080</teamscaleUrl>
    <projectId>my-project</projectId>
    <username>build</username>
    <accessToken>6lJKEvNHeTxGPhMAi4D84DWqzoSFL1p4</accessToken>
    <endCommit>master:HEAD</endCommit>
    <unitTestPartition>My Custom Unit Tests Partition</unitTestPartition>
  </configuration>
</plugin>
You can optionally specify an endCommit; if it is left out, the currently checked-out commit is automatically detected from the project's Git repository. You can use unitTestPartition to define the partition to which unit test coverage should be uploaded; integrationTestPartition and aggregateTestPartition work analogously.
Invoking mvn com.teamscale:teamscale-maven-plugin:upload-coverage will collect all generated XML reports and upload them to each partition in a single session. If you have a Maven project consisting of multiple submodules, you have to invoke the command in the root project.
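A typical CI invocation could therefore look like this (assuming the jacoco-maven-plugin's report goal is bound to the verify phase):

# Run the tests and let JaCoCo generate the XML reports
mvn clean verify
# Collect all generated XML reports and upload them in a single session
mvn com.teamscale:teamscale-maven-plugin:upload-coverage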
Uploading from Jenkins
For Jenkins users we also have a plugin that provides an "after build step" to do the upload. For detailed instructions please see the Jenkins plugin page.
Upload from Azure DevOps
A plugin is provided for directly uploading coverage from TFS / Azure DevOps.
In order to authenticate against Teamscale, first add a variable to your pipeline that contains a Teamscale access key. When using the classic editor, you can add it on the "Variables" tab.
When using a YAML pipeline, you have to open the pipeline for editing, then click on the "Variables" button.
Change the variable type to "secret".
Add the task "Teamscale Report Uploader" to your pipeline.
Fill the following parameters:
- Files: Ant pattern for the reports to be uploaded.
- Report Format ID: You can find all valid IDs on the Supported Upload Formats and Samples page. If you want to upload multiple report formats, you need to add one pipeline task per format.
- Teamscale URL: The URL under which Teamscale is accessible.
- Username: The Teamscale username for which you have generated the Teamscale access key.
- IDE Access Key: The secret variable you have added to the pipeline, e.g. $(teamscaleAccessKey).
- Teamscale Project: The project ID of the corresponding Teamscale project. You can copy this from the URL when you open the project in Teamscale or find it in the Projects view.
- Partition (see Glossary): An arbitrary name to associate with the report uploads. Note that it's recommended to have a separate partition per pipeline and per report format, e.g. My Pipeline - Cobertura.
- Path to CodeCoverage.exe (optional): This must be defined if you want to upload a binary Visual Studio coverage file. The task will use this to convert the binary file to an XML file before the upload. For detailed instructions on how to convert the coverage files, see the TFS / Azure DevOps section.
The task will automatically resolve which revision it's running for, and will use this revision for the upload.
Import via Artifactory / S3
External analysis results can be directly imported to Teamscale via the Artifactory or S3 repository connector. The connector will extract the results from the repository, process the reports and store them in the configured partition.
Layout Prerequisites
Before importing external analysis results to Teamscale via Artifactory / S3, please ensure the following:
- All relevant external results reports are available in Artifactory / S3.
- The external results reports are packed in archives (i.e. ZIP files).
- The folder structure follows this pattern (all profilers and upload tools provided with Teamscale follow this format automatically, making it easy to configure; if you use your own schema, make sure to adapt the extraction patterns accordingly):
arbitrarily/many/directories/before/the/upload/root/uploads/<branch name>/<commit timestamp in ms (Java long)>[-<commit hash>]/<upload partition>/<artifact type (EReportFormat, case insensitive), e.g. 'jacoco'>/none/or/some/intermediate/directories/<ZIP archive file>
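As an illustration, a single JaCoCo report packed and pushed to a matching Artifactory path might look like this (repository URL, credentials, project name, branch, and commit data are placeholders):

# Pack the report into a ZIP archive
zip jacoco-report.zip jacoco.xml
# Upload it to a path that follows the expected layout
curl -u build:<API_KEY> -T jacoco-report.zip "https://artifactory.example.com/artifactory/my-repo/my-project/uploads/master/1578650400000-0842320ef195b2d9190ab32b2d24b4a77eb522b1/Unit%20Tests/jacoco/jacoco-report.zip"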
Configuration Steps
To import external results reports to a project:
- Add an Artifactory / S3 connector to the project.
- Configure the Artifactory connector options so that Teamscale can extract the reports from Artifactory.
- If you used the standard path format as specified in Prerequisites, you can skip this step. Otherwise, configure the following options so that Teamscale can process the reports:
  - Analysis report mapping: Ant pattern used to map the reports inside the ZIP archives to their corresponding report format. Example: **/junit.xml -> JUNIT, **/jacoco.xml -> JACOCO
  - Partition Pattern: Regular expression used to extract the partition from the path of the artifacts. The first non-null group is used as the partition, e.g. /coverage/([^/]+)/ would use the sub-folders of coverage as the partition names.
- Save the project.
Configuration Example
Below is the structure of the Artifactory containing the external results reports.
- lcov-reports.zip contains the LCOV coverage report (lcov.info).
- findings-reports.zip contains the external findings report (warnings.json).
- MyRepository
  - MyProjectName
    - uploads
      - master
        - 1578650400000-4e8afe52737255ba40875a1af1fced1e08fc5316
          - coverage
            - LCOV
              - lcov-reports.zip
          - findings
            - GENERIC_FINDINGS
              - findings-reports.zip
        - 1578651300000-70eb476d01abd0d877d064f57a6da5e14c677687
          - coverage
            - LCOV
              - lcov-reports.zip
Accordingly, the Artifactory connector's configuration maps these paths to their report formats and partitions; the results uploaded via Artifactory then appear in Teamscale under the coverage and findings partitions.
Storing analysis results in an external storage backend
TIP
This section does not pertain to imports from storage systems and is only concerned with direct uploads to Teamscale.
By default, uploads of external analysis results over the REST API interface offered by Teamscale will be stored in Teamscale's internal storage. However, Teamscale also provides the option to use an external storage backend. Some advantages of using an external storage backend are:
- Data can be shared over multiple projects without duplicating it, which is especially useful for shadow instances and project copies.
- Since external storage backends such as S3 or Artifactory are typically built with additional data-safety measures, they can often offer better data persistence guarantees.
- The size of Teamscale backups is not impacted by external analysis results, as they are pulled from an external source.
- Dedicated storage services can provide much more elaborate data management than Teamscale.
- External data might need to be persisted anyway. In this case, duplication is avoided.
- Teamscale can always access the full report data, leading to better analysis results in certain rollback scenarios.
Configuring Teamscale to Store Analysis Results In S3
S3 server required
This requires an S3 server for storage, which is not shipped with Teamscale.
Required permissions
This configuration can only be done by users with the permission to create external storage backends. The Force selected default for all projects option can only be toggled by server administrators.
Copying projects with external storage backend
The copy of a project with an external storage backend keeps the mapping to the external storage backend of the original project. Thus, it also contains the stored analysis results of the original project. To create a separate external storage backend mapping for the copy, uncheck the "Copy all project data" checkbox. Without further changes, the resulting copy will store its analysis results in the global default storage backend.
- Create an external account containing the URL, user and API key. The account has to specify "multiple systems" (default), "S3", or "External Storage Backend (S3)" to show up in the following configuration.
- Navigate to Admin -> Storage Backends. This page contains a list of all configured external storage backends (internal storage is not displayed, since it is the default).
- Click New external storage backend to open the configuration dialog.
Click here for a detailed explanation of every field in the dialog.
| Field Name | Description |
|---|---|
| Name | An arbitrary name to identify this storage backend. This cannot be changed once the storage backend is created. |
| External Account | Contains the credentials and URL to authenticate with S3. The selected S3 account needs at least creation and deletion permissions on the S3 server to allow uploading data. |
| Protocol | The protocol used by the storage system, S3 in this case (other protocols, e.g. Artifactory, will be supported in the future). |
| Repository or bucket | In the case of S3, this is the bucket name Teamscale should upload to. |
| Upload Path Prefix (Optional) | By default, Teamscale uploads external data to the root of the selected S3 bucket. This option can be used to specify an optional subdirectory (relative to the S3 bucket root) where the uploads will be stored instead, e.g. teamscale/external-analysis-uploads/. |
| Use S3 credentials process | Same as "Use Credentials Process" in the S3 connector options. If you are not sure about this, we recommend keeping it disabled. |
- Once the external storage backend is configured, it still needs to be enabled to have any effect. There are two ways to enable an external storage backend: on a global basis, or on a project-by-project basis.
By default, the Force selected default for all projects toggle on Admin -> Storage Backends is enabled. This means that the external storage backend of all projects is configured globally by the instance administrator, and a project administrator cannot overwrite this for individual projects. When no specific external storage backend is selected, the default fallback is Teamscale's internal storage; this is the initial configuration state, so all projects store external upload data in Teamscale's internal storage. To change the default, select a storage backend from the list as the new default. The currently selected default is highlighted in green and shows a checkmark icon (see the screenshot below). This causes all new external analysis results on all projects to be stored in the selected storage backend instead of Teamscale's internal storage.
Sometimes, you may want to migrate projects on a server one by one, or even use multiple external storage backends at the same time. To do this, disable the Force selected default for all projects toggle. Project administrators can then select an external storage backend for each project. To change the external storage backend of a project, navigate to the Edit Project page and select the storage backend from the dropdown. Note that even when the Force selected default for all projects toggle is deactivated, the instance administrator can still select a global default external storage backend. Project administrators are then free to either configure a specific, custom storage backend for their projects or use the global default instead.
Here's a screenshot of the global default button.
DANGER
Changing the selected storage backend (even by changing the global default) may change the analysis history of the respective projects, automatically triggering a (partial) re-analysis.
Changing the external storage backend only takes full effect on the next upload.
Some effects of changing the external storage backend mentioned on this page might only take effect starting with the next upload.
Using multiple external storage backends with the same project
A project can only have one active storage backend at any point in time, to uniquely identify the target for new uploads. However, if you want to switch between different storage backends, the connector configuration of the automatically generated S3 connector can be copied into a normal S3 connector to continue reading from the old storage while already uploading to the new one.
Configuring Uploads
If you are already uploading external analysis results to Teamscale via any of the described upload methods, no changes are required on this end. Once the external storage backend is configured in Teamscale, it will be recognized and uploads will be redirected automatically.
Disabling External Storage Backends
TIP
Resetting the Teamscale configuration does not delete any data in the external storage backend.
Similar to enabling an external storage backend, disabling can take two forms:
- If the instance has a globally enforced external storage backend (Force selected default for all projects is enabled), simply deselect the current default.
- Otherwise, i.e., if Force selected default for all projects is disabled, one option is to enable it and ensure no default is selected, which forces all projects to use Teamscale's internal storage again. Alternatively, if only one project should go back to internal storage, select Internal Storage on the project edit page.
Lastly, if new analysis results were indeed uploaded while the external storage backend was enabled, Teamscale has generated a new External Storage Backend connector in the project. This connector will continue reading data from the external storage backend, hence keeping all existing external analysis results intact. If you want to remove all existing external data imported through this storage backend from a project, delete this connector as well. It will not be generated again as long as the project uses internal storage.
TIP
Since the auto-generated connectors remain in the projects even if the external storage backend is disabled globally, the current analysis state is not immediately impacted by this change. This avoids losing analysis state or even causing a full re-analysis of all projects.
Migrating External Analysis Results Between Storage Backends
Automated migration between different external storage backends is not supported. However, data can be moved between external storage backends manually, as long as the directory structure stays intact.
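For example, if both backends are S3 buckets, the data could be copied with standard tooling; the bucket names and prefix below are placeholders:

# Copy all uploaded results while preserving the directory structure
aws s3 sync "s3://old-backend-bucket/teamscale/external-analysis-uploads/" "s3://new-backend-bucket/teamscale/external-analysis-uploads/"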