Managed in the cloud, Tenable.io provides the industry's most comprehensive vulnerability coverage with the ability to predict which security issues to remediate first. It's your complete end-to-end vulnerability management solution.

Validation Criteria

Your integration with Tenable.io should meet the following criteria:

  • The integration must support bidirectional asset synchronization if the integration uses an asset model.
  • Ensure that all API calls made by your integration use a standard User-Agent string as described in the User-Agent Header guide. This helps Tenable identify your integration's API calls to assist with debugging and troubleshooting.
  • Contact Tenable via the Tech Alliances Application to demonstrate your third-party integration with Tenable's product or platform.
  • Explain how your integration uses Tenable's API and the specific endpoints that are being utilized. Tenable may ask to look at your integration's code to ensure scalability and to suggest best practices.
  • Ensure that your integration uses the proper naming conventions, trademarks, and logos in your integration's user interface. You can download a Media Kit on Tenable's media page.

Data Model

Tenable.io uses a discrete asset model with the following data types that are referred to throughout this guide:

  • Asset—An asset is an entity of value on a network that can be exploited. An asset can be anything, including laptops, desktops, servers, routers, printers, mobile phones, virtual machines, software containers, web applications, and cloud instances.
  • Plugin—A plugin is a vulnerability definition used to detect a vulnerability on an asset. Vulnerability definitions are also sometimes referred to as vulnerability signatures.
  • Finding—A finding is a single instance of a vulnerability appearing on an asset, identified uniquely by plugin ID, port, and protocol. For example: We discovered plugin 12345 on asset ABCD on port 1234.

The following diagram illustrates the relationships between these three data types.

Tenable.io Data Model

Several plugins can detect vulnerabilities on several assets, and a single plugin might detect vulnerabilities on the same asset but on different ports. Therefore, you can only uniquely identify a single finding by using a composite key (as shown in the data model diagram) with these properties:

  • asset.uuid
  • plugin.id
  • port.port
  • port.protocol

Compliance findings, on the other hand, use a simpler composite key with only two properties:

  • check_id
  • asset_uuid
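The two composite keys above can be sketched as small helper functions, for example when deduplicating findings on your side. The nested dictionary shapes are an assumption about how the export payloads arrange these fields:

```python
def vuln_finding_key(finding: dict) -> tuple:
    """Composite key that uniquely identifies a vulnerability finding."""
    return (
        finding["asset"]["uuid"],
        finding["plugin"]["id"],
        finding["port"]["port"],
        finding["port"]["protocol"],
    )


def compliance_finding_key(finding: dict) -> tuple:
    """Composite key that uniquely identifies a compliance finding."""
    return (finding["check_id"], finding["asset_uuid"])
```

Because the keys are plain tuples, they can be used directly as dictionary keys or set members when tracking which findings you have already processed.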

Data Exports

Tenable.io uses an asynchronous job-based API for the exportation of asset, vulnerability, and compliance data from the platform. A high-level workflow for exporting data from these APIs is illustrated in the following diagram.

Tenable.io Export Flow

  1. Initiate an export—Initiate a request to start an export job using your desired filters and job specification. The Tenable.io API returns an export job UUID when you initiate the request. You can use the following endpoints to initiate an export request.
  2. Check status of export job—Check the status of the export job and look for any chunks that are available. You can use the following endpoints to check the status of the export job.
  3. Download export data chunk—Download a chunk of data from the export. You can use the following endpoints to download data chunks.
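The three steps above can be sketched as a generic poll-and-download loop. Here `initiate`, `get_status`, and `download_chunk` are hypothetical callables standing in for your HTTP client; the endpoint paths in the comments show the asset export variant as an example:

```python
import time


def run_export(initiate, get_status, download_chunk, poll_interval=5):
    """Start an export job, poll its status, and yield each chunk
    as it becomes available."""
    export_uuid = initiate()              # e.g. POST /assets/export
    downloaded = set()
    while True:
        status = get_status(export_uuid)  # e.g. GET /assets/export/{uuid}/status
        for chunk_id in status.get("chunks_available", []):
            if chunk_id not in downloaded:
                downloaded.add(chunk_id)
                # e.g. GET /assets/export/{uuid}/chunks/{chunk_id}
                yield download_chunk(export_uuid, chunk_id)
        if status.get("status") == "FINISHED":
            break
        time.sleep(poll_interval)
```

Keeping the HTTP calls injectable makes the loop easy to reuse across the asset, vulnerability, and compliance export endpoints, which all follow the same job-based pattern.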

Your third-party integration should pull data from Tenable.io efficiently. Because Tenable.io maintains a stateful dataset, it isn't necessary to pull all data on every run. Instead, limit each export to the data that is relevant to your integration. By implementing the recommendations in the following sections, you can export only the delta since the last run and maintain parity with Tenable.io without reprocessing the same data.

Asset Exports

Asset metadata is critical to maintaining the state of vulnerability findings since all findings are linked to an asset. It's possible to have orphaned findings if the asset state isn't taken into account. For example, if an asset was terminated or deleted from Tenable.io then the vulnerability findings associated with that asset no longer exist and the findings never transition to a closed state.

Assets themselves can exist in three distinct states:

  • Active—The default state of an asset. Active assets are live assets on a network.
  • Deleted—The state that occurs when an asset is deleted from the Tenable.io platform. Deleted assets have their vulnerability information deleted and the findings don't transition to a closed state.
  • Terminated—The state that occurs when an asset has been terminated in a cloud environment. If Tenable.io is connected to a cloud platform via a cloud connector, the connector relays created and terminated assets to Tenable.io and replicates that information within the platform. Terminated assets do not have their associated findings transitioned to a closed state.

Typically, the following workflow is used to keep asset metadata in sync:

  1. Export active assets from Tenable.io and merge any desired changes into your environment.
  2. Export deleted assets from Tenable.io and flag or remove the associated assets and findings from your environment as desired.
  3. Export terminated assets from Tenable.io and flag or remove the assets and associated findings in the same manner that you handled the deleted assets.

Tenable recommends that you use the following filters and settings when exporting asset data. Filters are combined to form a query by using a logical AND operator, so you should use separate exports for the different asset states.

  • chunk_size—Specifies the number of assets per exported chunk. The range is 100 to 10000. If this parameter is omitted, Tenable.io uses the default value of 100. Tenable recommends that you use a size of 4000 to 5000 for the best performance.
  • filters.updated_at—Returns all assets updated later than the specified timestamp. This filter should be used for active asset deltas.
  • filters.deleted_at—Returns all assets deleted later than the specified timestamp. This filter should be used for deleted asset deltas.
  • filters.terminated_at—Returns all assets terminated later than the specified timestamp. This filter should be used for terminated asset deltas.
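Because these filters combine with a logical AND, each asset state needs its own export job. A minimal sketch of building the three request bodies from a single checkpoint timestamp (`asset_export_requests` is an illustrative helper, not part of the API):

```python
def asset_export_requests(last_run: int, chunk_size: int = 4000) -> dict:
    """Build one export request body per asset state. Filters combine
    with a logical AND, so each state must be exported separately.
    `last_run` is the Unix timestamp of the previous successful sync."""
    return {
        "active":     {"chunk_size": chunk_size, "filters": {"updated_at": last_run}},
        "deleted":    {"chunk_size": chunk_size, "filters": {"deleted_at": last_run}},
        "terminated": {"chunk_size": chunk_size, "filters": {"terminated_at": last_run}},
    }
```

Persisting `last_run` after each successful sync lets the next run export only the delta for all three states.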

Example asset request with a chunk_size of 4000 and an updated_at filter:

POST /assets/export
{
  "chunk_size": 4000,
  "filters": {
    "updated_at": 1234567890
  }
}

Example pyTenable code snippet with a chunk_size of 4000 and an updated_at filter:

For more information about asset exports with pyTenable, see Exports in the pyTenable documentation.

from tenable.io import TenableIO

tio = TenableIO(access_key='abc', secret_key='def')

for asset in tio.exports.assets(updated_at=1234567890, chunk_size=4000):
    print(asset)

Vulnerability Exports

The exportation of vulnerability findings is one of the most common use cases for the export API. The findings objects within the export data contain a multitude of information combined into a singular monolithic finding. It's tempting to use the information returned within this monolithic vulnerability finding export; however, you should avoid this due to the following potential issues.

  1. The asset sub-object within the finding is only there for backwards compatibility. Aside from the asset.uuid attribute, the rest of the data is simply a best guess since the asset sub-object stored within the finding doesn't account for multiple IP addresses, DNS addresses, etc.
  2. The plugin sub-object within the finding is only as current as the last observation of the finding. This means that if the plugin metadata, such as the vulnerability priority rating or the CVSS score, was updated since the finding was last observed then the plugin sub-object will not reflect those updates.

Vulnerability findings can exist within one of the following three states:

  1. Open—Findings that Tenable has determined to be active on the host.
  2. Fixed—Findings that have been remediated and have since transitioned from the active state to a resolved state.
  3. Reopened—Findings that were fixed but have been re-observed as an open finding.

Export chunks for vulnerability findings are compiled differently from asset export chunks, using a multi-step process:

  1. The job handler searches for assets that match the asset-related filters and allocates them to different chunks.
  2. Each chunk is independently processed per the vulnerabilities associated with the assets within that chunk.

This multi-step process means that there can be a high degree of variability in the data size of each vulnerability findings chunk. Some chunks might contain no data, while others are fully populated with findings for the specified number of assets. Empty chunks (chunks without any findings) are automatically dropped from the export; however, these empty chunks are still included in the count of total chunks processed. This disparity is a common point of confusion, so it's important to remember that the total number of chunks processed is not equal to the total number of chunks available for download.
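One way to avoid this confusion is to drive downloads from the list of available chunks in the status response rather than from the processed count. A sketch, assuming the status response exposes `total_chunks` and `chunks_available` fields:

```python
def export_chunk_summary(status: dict) -> dict:
    """Summarize an export status response: which chunk IDs to download,
    and how many empty chunks were dropped (processed but not available)."""
    available = status.get("chunks_available", [])
    total = status.get("total_chunks", len(available))
    return {
        "download": sorted(available),
        "empty_chunks_dropped": total - len(available),
    }
```

Iterating only over `download` ensures that dropped empty chunks never produce failed download requests.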

When you create the vulnerability findings export, you can define the number of assets per chunk via the num_assets parameter. Additionally, because of the multi-step process described above, if you intend to pull findings data for assets past their licensable period (generally 90 days), you should use the include_unlicensed parameter when you create the export request.

Tenable recommends that you consider the following parameters when creating a vulnerability findings export request.

  • num_assets—Specifies the number of assets for which findings are collected within each chunk of the export. The default is 50 and the maximum is 5000. Tenable recommends a value between 1000 and 3000 for the best performance.
  • include_unlicensed—Specifies whether or not to include unlicensed assets. Setting this parameter to true returns both licensed and unlicensed asset findings.
  • filters.severity—Specifies the severity of the vulnerabilities to include in the export. Tenable recommends that you set this parameter to ["medium", "high", "critical"] unless you need informational (non-vulnerability) or low severity (CVSS score below 4.0) findings.
  • filters.since—Only returns findings that have been observed since the specified Unix timestamp. This parameter uses the appropriate attribute for each findings state so it's recommended over discrete state filters.
  • filters.state—Specifies the state of the vulnerability findings that you want to include in the export. By default, Tenable.io only returns open and reopened findings so you need to specify all three states if you want both open and closed findings. For example, ["open", "fixed", "reopened"].

Example vulnerability export request for both licensed and unlicensed assets:

POST /vulns/export
{
  "num_assets": 500,
  "include_unlicensed": true,
  "filters": {
    "since": 1234567890
  }
}

Example pyTenable code snippet:

For more information about vulnerability exports with pyTenable, see Exports in the pyTenable documentation.

from tenable.io import TenableIO

tio = TenableIO(access_key='abc', secret_key='def')

for finding in tio.exports.vulns(since=1234567890, num_assets=500):
    print(finding)

Compliance Exports

The logic used for compliance exports is less complex. Compliance export chunks are defined by the num_findings parameter and you can specify asset filters to narrow the scope of the results to only the asset UUIDs you're interested in. Tenable recommends that you consider the following parameters for compliance exports:

  • num_findings—Specifies the number of compliance findings per exported chunk. The minimum is 50, the maximum is 10000, and the default is 5000.
  • asset—An array of asset UUIDs to narrow the export results. If this parameter is unspecified, the export includes findings for all assets.
  • filters.last_seen—Filters the export results to include findings observed since the specified timestamp.

Example compliance export request

POST /compliance/export
{
  "num_findings": 5000,
  "filters": {
    "last_seen": 1234567890
  }
}

Example pyTenable code snippet for a compliance export request

For more information about compliance exports with pyTenable, see Exports in the pyTenable documentation.

from tenable.io import TenableIO

tio = TenableIO(access_key='abc', secret_key='def')

for finding in tio.exports.compliance(last_seen=1234567890, num_findings=5000):
    print(finding)

Plugin Exports

Exporting plugin details from Tenable.io is different from exporting account-specific metadata. You can use the List plugins endpoint to retrieve a paginated list of plugin information. The plugin endpoints are fairly easy to use; however, there are some filters you should consider using:

  • last_updated—Filters the response to include only plugins updated after the specified date. This filter accepts timestamps in ISO Date (YYYY-MM-DD) format instead of Unix format.
  • size—The number of plugin records to include for each page.
  • page—Specifies which page in the paginated data you want to retrieve. Pages start at 1.
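The pagination above can be sketched as a loop that advances `page` until an empty page comes back. Here `fetch_page` is a hypothetical callable wrapping `GET /plugins/plugin`:

```python
def iter_plugins(fetch_page, last_updated="2022-12-01", size=1000):
    """Page through the plugin listing until a page comes back empty.
    `fetch_page(last_updated, size, page)` is assumed to return the
    list of plugin records for that page."""
    page = 1  # pages start at 1
    while True:
        records = fetch_page(last_updated=last_updated, size=size, page=page)
        if not records:
            break
        yield from records
        page += 1
```

Combined with the `last_updated` filter, this lets you refresh only plugin definitions that changed since your previous run.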

Example plugin export request for plugins updated after 2022-12-01

GET /plugins/plugin?last_updated=2022-12-01&size=1000&page=1

Example pyTenable code snippet for a plugin export

For more information about plugin exports with pyTenable, see Plugins in the pyTenable documentation.

from tenable.io import TenableIO

tio = TenableIO(access_key='abc', secret_key='def')

for plugin in tio.plugins.list():
    print(plugin)

Asset Imports

You can import asset metadata to preload assets or to update assets within the Tenable.io platform. Asset metadata is processed using the same logic as scan data to ascertain whether the asset already exists. Depending on the result, the relevant action (create or update) is performed using the provided metadata, and the source presented in the import is appended to the asset as a source feeding the asset metadata.

The asset import API can be used to import asset data in JSON format. A single request cannot exceed a 5 MB payload. For an example of how to construct an asset import request, see Add Asset Data to Tenable.io in the Developer Portal. The asset import endpoint accepts the following parameters:

  • source—A user-defined name for the source of the import containing the asset records. While this parameter is a free-form string, Tenable recommends that you define the source in the form of "Vendor" or "Vendor Product".
  • assets—A list of asset objects to import into the Tenable.io platform. Each asset object requires at least one of the fqdn, ipv4, netbios_name, or mac_address properties. However, the more supported metadata you include with an asset, the more likely a match can be made. For a list of supported properties, refer to Asset Attribute Definitions.
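To stay under the 5 MB request limit, large asset lists can be split into multiple import payloads. A minimal sketch that batches by serialized JSON size (`batch_assets` is an illustrative helper, not part of the API):

```python
import json

MAX_PAYLOAD_BYTES = 5 * 1024 * 1024  # 5 MB request limit


def batch_assets(assets, source, limit=MAX_PAYLOAD_BYTES):
    """Yield import payloads that each stay under the size limit
    when serialized as JSON."""
    batch = []
    for asset in assets:
        candidate = {"source": source, "assets": batch + [asset]}
        if batch and len(json.dumps(candidate)) > limit:
            yield {"source": source, "assets": batch}
            batch = []
        batch.append(asset)
    if batch:
        yield {"source": source, "assets": batch}
```

Each yielded payload can then be submitted as its own import request. Note that an oversized single asset is still emitted alone; handling that edge case is left to the caller.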

Example asset import request

POST /import/assets
{
  "source": "VendorName Product",
  "assets": [
    {
      "fqdn": ["example_one.py.test"],
      "ipv4": ["", ""],
      "netbios_name": "example_one",
      "mac_address": ["00:00:00:11:22:33", "00:00:00:11:22:34"]
    },
    {
      "fqdn": ["example_two.py.test"],
      "ipv4": [""],
      "netbios_name": "example_two",
      "mac_address": ["00:00:00:11:22:35"]
    }
  ]
}

Example pyTenable code snippet for an asset import request

For more information about asset imports with pyTenable, see Assets in the pyTenable documentation.

from tenable.io import TenableIO

tio = TenableIO(access_key='abc', secret_key='def')

job_id = tio.assets.asset_import('VendorName Product', {
    'fqdn': ['example_one.py.test'],
    'ipv4': ['', ''],
    'netbios_name': 'example_one',
    'mac_address': ['00:00:00:11:22:33', '00:00:00:11:22:34']
}, {
    'fqdn': ['example_two.py.test'],
    'ipv4': [''],
    'netbios_name': 'example_two',
    'mac_address': ['00:00:00:11:22:35']
})