For .NET 6, a native implementation of the Software Telemetry .NET API is now available as the "softwaretelemetry" NuGet package, which also supports Software Telemetry Counters. To use the native library, set the environment variable APM_DOTNET_TRANSPORT.
Enumerated integer values returned by an SNMP counter are now mapped to their respective labels. In addition, values returned as IP addresses can now be evaluated, and SNMP indices are read from the MIB to map the available instances correctly.
If you analyze a specific request and want to find out, for example, what else the specific user did at that point in time, you can now go directly from the Associated Requests of that request to the Research View by clicking the "Open Research" button. The time range from two minutes before to two minutes after the start time of the request is then used as the filtered time range for the Research View. The value of a column of an associated request can also be set as a filter for the Research View by right-clicking the value and selecting "Open Research View Filtered".
Sending mails using the curl library is now also supported on Microsoft Windows in addition to Linux.
If the length of a column defined in the Log Definition is increased for an already defined Log Pool, the respective column is now also altered automatically in the database table to prevent errors when writing values to the database.
To allow the export of many requests (up to 10000) within one zip file, it is now possible to export the result of a query in the Research View. Click "Download" and select "All Requests" in the download dialog. The export may take several minutes, so keep the request count low by adjusting the time range and the filter criteria. Make sure to clean up the downloaded session from the software-telemetry session files list in order to free the disk space on the server.
TCP transport between the Fabasoft app.telemetry Agent and the library can now be secured using TLS, both for the Software-Telemetry C/C++ library and the Java library. To secure the communication, set the configuration parameter "TelemetryTLSPort" in the agent configuration (10008 by default) and configure this port correspondingly on the library side by setting the environment variable APM_TRANSPORT=tls://<agent hostname>:<port>. You can still have the agent listen in parallel on the unsecured port (10002 by default) to receive telemetry data from libraries of older versions.
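For illustration, with the default TLS port mentioned above and an assumed agent host name of apmagent.example.com (a placeholder), the library-side setting would look like this:

    APM_TRANSPORT=tls://apmagent.example.com:10008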
SNMP versions 1 and 2c, supported by app.telemetry for many years, use unencrypted UDP packets to query data from servers or network nodes. SNMPv3 supports authentication and encryption of SNMP traffic.
Until now, only single Service Checks could be forced via right-click and selecting "Force Service Check". This is now also possible for a Service Group, which forces all Service Checks belonging to this group at once.
To track long-running requests that were running at a defined point in time, go to the Research View, enter the specific time as the end time of the search interval, select a suitable start date and click the new “Running” checkbox. The query will show all requests from the selected interval that were still running at the beginning of the second specified as the end time.
When analyzing subrequests originating from an application whose Log Pool has the “Don't Resolve Subrequests” setting selected, you can click the new link in the Parameter column of the GetContext event to open the request in a new browser window. These links are not available from sessions.
All Fabasoft app.telemetry Services are supported also on Microsoft Windows Server 2022.
To analyze specific requests in detail, you can open the “Request Details” view, visible at the bottom of the Software-Telemetry and Software-Telemetry Research View, in a separate browser window. A new window is opened by selecting a request and clicking “Open in New Window” in the left sidebar. Alternatively, you can copy the link referring to the selected request by choosing “Copy Link” in the context menu of a request. This link can then be sent to other users, so that they can directly open the corresponding request for analysis.
You can also copy the link within the “Details” tab of the “Request Details” view to refer directly to the currently visible node within the request, or in the “Associated Requests” tab to obtain a direct link to an associated request. In both cases, the corresponding symbol is located in the bottom bar.
If you don’t want to load full requests including all associated subrequests for a Log Pool, you can set the property “Don’t Resolve Subrequests” in the Log Pool configuration, which can alternatively be set in the corresponding Log Definition as the “dontResolveSubrequests” flag in the root “APMLogAnalyzer” element.
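As a sketch only (the attribute notation and the remaining content of the Log Definition are assumptions, not taken from the product documentation), the flag could look like this:

    <APMLogAnalyzer dontResolveSubrequests="true">
      ...
    </APMLogAnalyzer>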
When this property is set, request contexts are only resolved down to the level of the corresponding application of this Log Pool. Associated subrequests are still shown in the “Associated Requests” tab of the Request Details pane. They can be loaded in full detail by selecting the corresponding subrequest and clicking “Copy Link”; the copied link then opens this specific subrequest in a new window.
You can add an additional application filter in the Log Pool configuration to filter for application properties that have been registered in the instrumentation of your application or have been set as environment variables with the prefix “APM_PROPERTY_” on your system. The filter is entered in the form of an SQL-like query and can be used to configure separate Log Pools for applications with the same registration that differ in other properties, e.g. to distinguish between different systems (production and test system) with otherwise identical applications.
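As a purely hypothetical illustration (the property name and the exact filter syntax are assumptions), an application started with the environment variable

    APM_PROPERTY_SYSTEM=production

could be selected by an application filter such as

    system = 'production'

so that only the requests of the production system are collected in this Log Pool.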
If you want to have the screenshot of a feedback sent as attachment in the notification mail, you may check the “Add Feedback Session Attachment” property in the configuration of your SMTP Notification Account.
Counters defined via Software Telemetry can now be shown directly in your dashboard using the “Software Telemetry Counter” data source type when creating a new chart. To specify the counters for the chart, an SQL-like query string is entered to filter for specific counter attributes.
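A hypothetical filter string (the attribute names are assumptions, not taken from the product documentation) could look like this:

    name = 'requests' AND application = 'myapp'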
To hide specific columns in Log Pools, Filtered Log Pools or Log Pool Views, set the displaypriority for the relevant column in the Log Definition to -1. This also applies to system columns such as “Agent”, if you want to hide internal hostnames from users having access to that Log Pool.
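A hypothetical sketch of such a column definition (the element and attribute notation are assumptions):

    <Column name="Agent" displaypriority="-1"/>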
If you are looking for a specific text in the telemetry points within a particular time interval, you can use the new full-text search field in the Research View to identify all requests containing this search string.
Specific columns containing personal data (e.g. Loginname, user group) can be automatically anonymized by setting the corresponding “anonymous” flag in the Log Definition. Additionally, Data Anonymization has to be activated in the Log Pool Properties, where the number of days after which data is anonymized is also set.
The entries of a column to be anonymized are encoded using a random GUID which changes every day. This GUID is written to the database instead of the real entry, and the information used to decode the GUID back into the real value is stored only for the number of days specified for the anonymization. Therefore, for personal data older than the specified number of days only the GUID can be read, which still makes it possible to distinguish between e.g. different users, but no longer shows the directly related personal information. For younger data, the real values as described in the Log Definition are shown, so the personal data can still be read directly for this specific time period.
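To make the mechanism easier to picture, the following minimal Java sketch (assumed names, not the product implementation) illustrates the idea: every distinct value of an anonymized column gets its own random GUID per day, and the table needed to decode a GUID back into the real value is dropped once it is older than the configured number of days.

    import java.time.LocalDate;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.UUID;

    class DailyPseudonymizer {
        // per day: real value -> GUID (the GUID is written to the database instead of the real entry)
        private final Map<LocalDate, Map<String, UUID>> encode = new HashMap<>();
        // per day: GUID -> real value (kept only for the configured number of days)
        private final Map<LocalDate, Map<UUID, String>> decode = new HashMap<>();

        UUID anonymize(LocalDate day, String realValue) {
            return encode.computeIfAbsent(day, d -> new HashMap<>())
                         .computeIfAbsent(realValue, v -> {
                             UUID id = UUID.randomUUID();
                             decode.computeIfAbsent(day, d -> new HashMap<>()).put(id, v);
                             return id;
                         });
        }

        String display(LocalDate day, UUID id) {
            Map<UUID, String> table = decode.get(day);
            // if the decode table has already been dropped, only the GUID itself remains readable
            return (table != null && table.containsKey(id)) ? table.get(id) : id.toString();
        }

        void cleanup(int anonymizationDays) {
            // drop the decode information for days older than the configured anonymization period
            decode.keySet().removeIf(d -> d.isBefore(LocalDate.now().minusDays(anonymizationDays)));
        }
    }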
If anonymization is activated for an already existing Log Pool that had no anonymization before, older data will not be anonymized, and new data arriving after the activation will be stored in a new table with the corresponding anonymization. Personal data contained in the raw data files, in sessions or in feedback dialogs is not anonymized.
The anonymized data encoded by the GUID is also taken into account accordingly in the statistics.
In addition to the anonymization of columns containing personal data, such columns can be hidden altogether from specific users or groups by entering such users and groups in the Log Pool Properties next to “Hide Personal Data from User/Group”.
Applications like the Fabasoft Native Client process data in the context of multiple services, each using its own app.telemetry server. The correct assignment of telemetry data to the right app.telemetry server is now supported by using a common directory per service. Initialize and close your service directories as required and associate your registered applications with the directory context, so that each application's telemetry data is sent to the correct telemetry server, even in the case of later data recovery processing.
When registering a telemetry counter, you can automatically configure a corresponding service check for this counter by specifying at least one of two new attributes in the registration:
A Kubernetes namespace provides the scope for Pods, Services and Deployments in a cluster. The namespace is thus important information about where a specific container belongs. As Fabasoft app.telemetry should know where services belong, the Fabasoft app.telemetry library attaches this namespace information to the application registration as an application property (apm:namespace). There is currently no documented source from which the namespace can be read, but if a file named /run/secrets/kubernetes.io/serviceaccount/namespace exists, it contains the namespace and the library reads that file. If the file is not available, the Kubernetes Downward API should be used to provide an environment variable named “APM_NAMESPACE” containing the namespace of the pod. The Fabasoft app.telemetry library will pass the value to the apm:namespace application property.
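If the file is missing, a container spec fragment like the following (standard Kubernetes Downward API; the surrounding manifest is omitted) provides the environment variable:

    env:
      - name: APM_NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace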
All Fabasoft app.telemetry Services are supported also on RHEL/CentOS 8.
The Fabasoft app.telemetry Agent can provide an HTTP endpoint for retrieving Software-Telemetry Based Counters of all connected applications in the OpenMetrics (draft) format, which is supported e.g. by Prometheus. Enable this feature by providing the apptelemetryagent command line options openMetricsBindAddress and openMetricsBindPort.
The new “gap” column denotes the time between the previous and the current telemetry event, which helps to find delays in parts of the code that are not instrumented.
Automatic cleanup of Log Pool tables and statistics is supported on a daily basis. Since deleting records is at least as costly as inserting new records, Fabasoft app.telemetry 2020 supports the automatic usage of partitioned tables in PostgreSQL (as implemented since PostgreSQL 11). Every day a new partition table is created to hold the records of that particular day, and cleanup is performed by simply dropping outdated partitions. No migration of the old table layout is performed, so switching from a non-partitioned to a partitioned table layout requires manual migration of data if necessary. An app.telemetry database connection can either use partitioning for all tables or store all data in the traditional table layout, but you can create an additional database connection and migrate the Log Pools by changing the assigned database connection, optionally migrating the data.
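For background, this is how daily range partitioning works in PostgreSQL 11 (illustrative only, with hypothetical table and column names; the actual table layout is created by app.telemetry):

    -- parent table partitioned by day
    CREATE TABLE logpool_requests (
        logtime  timestamp NOT NULL,
        duration bigint
    ) PARTITION BY RANGE (logtime);

    -- one partition per day
    CREATE TABLE logpool_requests_20200401 PARTITION OF logpool_requests
        FOR VALUES FROM ('2020-04-01') TO ('2020-04-02');

    -- cleanup: dropping a whole partition is far cheaper than deleting its rows
    DROP TABLE logpool_requests_20200301;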
Linux systems provide status data and performance measures in files under the /sys folder. There are a number of such counters that are not accessible using SNMP, so Fabasoft app.telemetry agents can now access the files in the /sys folder to read the respective counters.
Additional functions are available for calculating counter values:
Grafana is a common visualization tool used heavily in container environments. Fabasoft app.telemetry now supports JSON REST queries that reflect the Software-Telemetry counters in a format compatible with Prometheus data sources, so you can integrate Fabasoft app.telemetry Software-Telemetry counters into your Grafana dashboards. See the REST API documentation in the product kit for more details.
Previously, the Sunburst visualization of the Activity Statistics data operated on the unfiltered statistics data regardless of the filters set in the Stream, Time Line or Data Grid view. Now, filters are respected and can even be extended based on the selection within the Sunburst graph. In the Stream and Time Line visualization, an additional dimension “Agent” is supported.
The Fabasoft app.telemetry RPMs now support installation within RHEL/CentOS 7 containers, so a Fabasoft app.telemetry server can now be hosted in a container environment, e.g. on a Red Hat OpenShift Container Platform.
To improve application identification in a container environment, an application is now identified by a random GUID instead of the Agent Id and Process Id, which are not unique in a container environment, where the main process of each container is started with PID 1. This change requires an extension of the block header format of the software telemetry data, which makes the data incompatible with older versions of Fabasoft app.telemetry. Therefore, an older Fabasoft app.telemetry Server cannot read requests exported from a version 2020 server, and during the upgrade process an older Fabasoft app.telemetry Server will not be able to process telemetry data received from already upgraded Fabasoft app.telemetry Agents of version 2020. The preferred upgrade sequence is therefore to upgrade the Fabasoft app.telemetry Server before the agents.
The addition of a counter API based on the Fabasoft app.telemetry Software-Telemetry data transport enables applications to provide insight into their operation with minimal configuration. The Software-Telemetry based counter API also sidesteps limitations of Windows Performance Counters and SNMP by allowing for much greater flexibility to identify counter instances. Check the SDK-Documentation for more information.
Applications can register additional metadata about themselves that enables operators to gain a better understanding of which specific services were involved in requests. A small set of predefined metadata is automatically provided by the Software-Telemetry API libraries; check the SDK documentation for more information.
The new Fabasoft app.telemetry Configuration service manages app.telemetry configuration data such as:
The files that represent the listed configuration data changed their location, which may require changes to the configuration of backup software to ensure backups remain usable in the future. A running Fabasoft app.telemetry Configuration Service is very important for the correct operation of all Fabasoft app.telemetry services.
The Fabasoft app.telemetry infrastructure scripting gained functions to load and parse Log-Definitions (including forms) and JSON files into JavaScript objects that are directly usable.
Dynamic instrumentation of Java Applications using JVMTI has been removed.
Authentication using the Apache HTTPD module mod_auth_openidc with Keycloak is now supported. With appropriate configuration the Fabasoft app.telemetry Client can also provide autocomplete support for Keycloak roles to simplify the configuration of access permissions within Fabasoft app.telemetry.
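A minimal sketch of such an Apache HTTPD configuration (host names, realm, client data and protected path are placeholders; consult the Fabasoft app.telemetry documentation for the settings it actually requires):

    OIDCProviderMetadataURL https://keycloak.example.com/auth/realms/example/.well-known/openid-configuration
    OIDCClientID apptelemetry
    OIDCClientSecret <client secret>
    OIDCRedirectURI https://apm.example.com/oidc_redirect
    OIDCCryptoPassphrase <random passphrase>

    <Location /apptelemetry>
        AuthType openid-connect
        Require valid-user
    </Location>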
The Software-Telemetry Research View implemented in Fabasoft app.telemetry 2018 supports the identification of requests matching complex filter criteria over longer timespans. In contrast to the standard Software-Telemetry Data View, which has been primarily designed for live request viewing, the Research View handles a dataset of up to 10000 requests which is calculated on demand and can be sorted appropriately. The calculation of the dataset is performed on request, asynchronously, and can be interrupted, so the user has full control of the calculation and client timeouts no longer occur due to long-running queries. The mandatory specification of the time period to search in directly influences the duration of the query and helps to avoid long-running queries. The analysis of single requests is available in the Research View as well as in the Data View. Navigating to the requests from the Request Statistics view and the Request Categories view leads to the new Research View to support the analysis of larger datasets.
In case you have customized the Apache HTTPD configuration for app.telemetry in /etc/httpd/conf.d/apptelemetrywebserver.conf it may be necessary to review the Fabasoft app.telemetry Webserver configuration file in /etc/httpd/conf.d/apptelemetry.conf to reapply your customizations.
On CentOS 7, app.telemetry services will no longer be started immediately after an installation, to improve support for installations on systems that are not fully running (for example during an automated kickstart installation). Since you need to run the /opt/app.telemetry/bin/serversetup.sh script after an installation of a Fabasoft app.telemetry Server, you will not notice a difference there. Fabasoft app.telemetry Agent installations, on the other hand, require a manual start of the app.telemetry Agent (or a system reboot if you prefer that).
In order to trigger status events in case of an invalid agent time status, additional counters are available from the “Server Statistics Counter” plugin under the agent object, which reflect the values of the agent view.
The “Time Drift (ms)” counter represents the absolute value of the time difference, in milliseconds, between the host machine of the app.telemetry server and the host machine of the selected app.telemetry agent. In some environments (e.g. when using Kerberos authentication) it is essential to keep the time drift between machines within a small range (< 5 seconds). With the new counter you can set warning or error limits to get informed when the time difference between servers is outside the valid range.
The “RTT (ms)” counter represents the time in milliseconds it takes to send a simple request to the selected agent and receive the answer. The time depends on the quality of the network connection and the load of the systems involved.
In order to track the quality of the SSL encryption, the SSL Version and SSL Cipher properties are logged for SSL connections.
In addition, the remote port property is reported to allow identifying HTTP connections based on the remote port.
Some counters (e.g. from SNMP sources) report status values as strings. These strings can be matched against regular expressions to generate a warning or error status.
In the Service Check configuration dialog you can either use Numeric Ranges to specify a range of critical values, as in previous versions, or use the new Text Match to specify which text values should trigger an error. The Pattern is a regular expression matched against the value of the counter.
For example, a counter can be reported as an error if its value is not equal to connected or ok, or a critical status can be generated if the value starts with err.
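As an illustration only (the exact dialog options and the regular expression dialect are assumptions): a pattern such as err.* matches values starting with "err"; the connected/ok case can be expressed, depending on the configured match semantics, with a negated expression such as ^(?!connected$|ok$).+ if the dialect supports look-ahead.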
To temporarily avoid the usage of dashboards or dashboard charts, they can now be deactivated by an administrator using the context menu in the configuration mode.
To avoid notifications when the status changes between OK and Warning, notifications can now be filtered not only by the target status but also by the source status.
Thus, if only Critical is selected as both the from and the to status filter, notifications are only sent when a selected check, service or service group changes to Critical or from Critical to any other status.
There are several password or passphrase parameters required for service checks and database connections. In order to store these parameters safely, they are now encrypted using an RSA key, so they are not readable in the infra.xml. The encryption keys are generated automatically by the app.telemetry server. Make sure to create a backup of the key pair encryption.(key|pem) located under /etc/app.telemetry/server/ or C:\ProgramData\Fabasoft app.telemetry\server\ to allow the server to decrypt the stored values in case the infra.xml is restored on another system.
SSL Version and SSL Cipher were added as parameters of the nginx softwaretelemetry module.
Some additional options have been implemented to allow further customization of your feedback dialogs. Choose your own fonts, define the border radius and shadow of the form and the buttons and hide the copyright text to adapt the feedback dialog to your website design.