This chapter contains technical and informational descriptions of Fabasoft app.telemetry features not described in the “Installation Guide” or elsewhere.
With Fabasoft app.telemetry it is possible to get information about the location of a virtualized system which uses VMware ESX-Server virtualization technology.
Note from the "vSphere Basic System Administration Guide" (ESX/ESXi 4):
ESX/ESXi includes an SNMP agent embedded in hostd that can both send traps and receive polling requests such as GET requests. This agent is referred to as the embedded SNMP agent.
Versions of ESX prior to ESX 4.0 included a Net-SNMP-based agent. You can continue to use this Net-SNMP-based agent in ESX 4.0 with MIBs supplied by your hardware vendor and other third-party management applications. However, to use the VMware MIB files, you must use the embedded SNMP agent.
By default, the embedded SNMP agent is disabled. To enable it, you must configure it using the vSphere CLI command vicfg-snmp.
Note: VMware does not officially support SNMP (GET requests) with ESXi products but since ESXi 4 it is possible to obtain the required SNMP counters.
The Fabasoft app.telemetry virtual host detection feature requires the VMware SNMP counters provided by the embedded VMware SNMP agent.
Enable the VMware SNMP support in one of the following ways:
To turn on SNMP support for VMware ESXi 4, use the vSphere remote CLI interface to enable the embedded VMware SNMP agent, which provides the required SNMP counters for VMware products.
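A sketch of these configuration steps with the vSphere CLI's vicfg-snmp command (host name and credentials are placeholders; verify the exact options against your vSphere CLI version's documentation):

```shell
# Set the SNMP community on the ESXi host (placeholder host/credentials).
vicfg-snmp --server esx01.example.com --username root --password '***' -c public

# Enable the embedded SNMP agent.
vicfg-snmp --server esx01.example.com --username root --password '***' --enable

# Show the resulting SNMP agent configuration for verification.
vicfg-snmp --server esx01.example.com --username root --password '***' --show
```

Afterwards, verify the configuration from the proxy agent system with the snmpwalk test described below.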
Test SNMP Configuration for VMware ESX/ESXi 4.0:
To test the correct SNMP configuration of VMware vSphere (ESX 4.0) / ESXi 4.0 servers check if the following SNMP counters are available from a remote system (especially from the app.telemetry proxy agent used for the VMware ESX server VM host detection):
Check the existence of the SNMP counters on the VMware ESX server from your proxy agent system by means of:
snmpwalk -v 1 -c <your_community> <ESX_server_IP> .1.3.6.1.4.1.6876
Example: snmpwalk -v 1 -c public 10.20.30.40 .1.3.6.1.4.1.6876
The Fabasoft app.telemetry notification system is based on a notification channel, which defines how notifications are sent, and several notification accounts, which are notified of status changes. The following notification channel types are supported:
To set up the notification system correctly, first create a notification channel and then create sub elements of type notification account inside the notification channel.
Command Line Notification Channel:
Create a notification channel, choose "Command Line Notification" and define the notification command line to be executed on any status change. The command line consists of the absolute path of the command or script (on the Fabasoft app.telemetry server) and additional parameters passed to the command or script. Parameters with spaces must be quoted ("). The following variables can be used to pass concrete values to the notification command:
An example of such a command line looks like:
Example: Command Line Notification |
---|
/path/to/script.sh %FILE %TO %SUBJECT %AGENTHOSTNAME … this will result in the following call: /path/to/script.sh |
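A minimal sketch of such a notification script (the argument order matches the variables in the example above; the log file path and the output format are assumptions — a real script might send mail or call a webhook instead):

```shell
#!/bin/sh
# Minimal notification handler: app.telemetry invokes it with the values
# of %FILE %TO %SUBJECT %AGENTHOSTNAME as positional arguments.
FILE="$1"; TO="$2"; SUBJECT="$3"; AGENTHOST="$4"

# Append one line per notification to a log file (illustrative path).
echo "$(date -u '+%Y-%m-%d %H:%M:%S') to=${TO} host=${AGENTHOST} subject=${SUBJECT} file=${FILE}" \
  >> /tmp/apptelemetry-notify.log
```

Remember that the command line in the channel definition must reference the script by its absolute path on the Fabasoft app.telemetry server.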
Set Timing Options for Notifications:
For some situations or in special installations you may need to tune some timing options for the notification system:
By default, notification status is processed every 10 seconds, starting 60 seconds after the app.telemetry Server service has started. You may modify these settings by adding attributes to the respective NotificationChannel element in the infra.xml. To modify the notification interval, add an attribute scheduleSeconds with a value between 1 and 600 (the interval in seconds). To modify the time between the service start and the first notification, add an attribute delayOnStartSeconds with the number of seconds to wait, between 0 and 3600. These two parameters cannot be changed at runtime.
Example: Notification Channel Configuration |
---|
<NotificationChannel id="100123" name="Mailserver" status="0" type="smtp" delayOnStartSeconds="300" scheduleSeconds="20">
  <Param key="Authenticate" value="Anonymous"/>
</NotificationChannel>
This configuration delays notification processing for 5 minutes after server start (instead of the default 1 minute) and processes notifications every 20 seconds (instead of the default 10 seconds).
Fabasoft app.telemetry allows customization of notification templates. The default notification template files are located inside the template sub directory of the installation directory:
These template files support substitution variables that are replaced with the current value for the notification. The variables are written in the templates with the following escape syntax: "<%VARIABLENAME%>" (e.g. <%TIMESTAMP%>).
Status-Change Notification Templates:
The following substitution variables exist for status-change notification templates:
Example: Status Change Notification Template (commandlinetemplate.txt) |
---|
<%NOTIFICATIONCLASS%> "<%NAME%>" changed to <%STATUS%>
Fabasoft app.telemetry notification
Message
The status of <%NOTIFICATIONCLASS%> "<%NAME%>" changed from <%PREVIOUS_STATUS%> to <%STATUS%>.
Date: <%LOCALTIMESTAMP%>, <%TIMESTAMP%>
<%SUBNODES%>
Reason
<%GROUP%> Service group "<%NAME%>" reported status <%STATUS%> <%/GROUP%>
<%SERVICE%> Service "<%NAME%>" on agent <%HOSTNAME%> reported status <%STATUS%> <%/SERVICE%>
<%CHECK%> Service Check "<%NAME%>" reported <%VALUE%> <%MESSAGE%> <%/CHECK%>
<%SERVICEPOSTFIX%> -- <%/SERVICEPOSTFIX%>
<%GROUPPOSTFIX%> --- <%/GROUPPOSTFIX%>
<%/SUBNODES%>
Escalation/Feedback Notification Templates:
The following substitution variables exist for escalation/feedback notification templates:
Since product version 2014 Fall Release you can also customize the notification e-mail subject by replacing the predefined <SUBJECT>-tag with a custom template value consisting of raw text combined with any other desired property value.
Here is an example based on feedback forms having a field with name “Message”:
<title>Feedback via form <%FORMNAME%> from user <%FROM%> with message: <%PROPERTY%>Message<%/PROPERTY%></title>
Example: Escalation/Feedback Notification Template (escalationmailtemplate.html) |
---|
<html>
<head>
<title><%SUBJECT%></title>
<style type="text/css">
</style>
</head>
<body>
<h1>Fabasoft app.telemetry feedback notification</h1>
<h2>Message</h2>
<h2>Date</h2>
<h2>Application</h2>
<h2>Sent by</h2>
<h2>Sent from</h2>
<h2>Feedback Infos</h2>
<h2>Open/View Feedback Session</h2>
</body>
<%ADD_FILES_AS_ATTACHMENTS%>
</html>
Fabasoft Folio object addresses (a.k.a. COO-addresses) are exact but quite meaningless with respect to the character of the object they represent. It is mainly for the sake of optimization that the Fabasoft app.telemetry instrumentation of Fabasoft Folio uses the 64-bit integer representation of the addresses to pass object identity information. Whereas the conversion to the "COO-Address" format has been coded into Fabasoft app.telemetry, a more user-friendly way of presenting Fabasoft Folio objects is also available.
Mapping of addresses to Names and References:
By providing an XML file that maps object addresses to names or references, Fabasoft app.telemetry can represent Fabasoft Folio addresses in a human-readable format, helping users interpret recorded request information more easily.
Generating the mapping file:
To generate the mapping file, the "Integration for app.telemetry Software-Telemetry" software component provides the XSL transformation file FSCAPPTELEMETRY@1.1001:GenerateLogAnalyzerData. By calling this XSL transformation from a script or a Fabasoft Expression, you receive an XML file containing the addresses and names of the following object classes:
To generate the mapping file, start a command line (cmd.exe (Microsoft Windows) or bash (Linux)) on a Fabasoft Folio Server of your domain with a Fabasoft Folio service user account having permissions to read all objects, set the HOST and PORT variables to point to the Fabasoft Folio backend service, and execute the following command (call the fsceval command in one line):
On Linux systems the default service user account is fscsrv and the default port of the Fabasoft Folio backend service is 18070. The fsceval binary is located under /opt/fabasoft/bin/ but should already be available via the PATH-variable without an absolute path.
Run fsceval (on Linux) to generate address resolution mapping file. |
---|
su - fscsrv
HOST=localhost
PORT=18070
fsceval -eval "coouser.COOXML@1.1:XSLTransformObject(coouser,
On Microsoft Windows systems you should be logged in with an administrative account (of Fabasoft Folio). Setting the HOST (default: localhost) and PORT (default: 18070) environment variables is optional and not required for a default installation.
Run fsceval (on Microsoft Windows) to generate address resolution mapping file. |
---|
fsceval.exe -eval "coouser.COOXML@1.1:XSLTransformObject(coouser, |
Note: In earlier Fabasoft Folio or Fabasoft eGov-Suite installations the component name was FSCAPPLSTRUDL@1.1001 instead of FSCAPPTELEMETRY@1.1001.
The result is a generated fscdata.xml for your domain:
Syntax of fscdata.xml mapping files |
---|
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<Objects>
  <Object id="COO.1.1.1.2500" name="Fabasoft Folio/Base" reference="ComponentBase@1.1"/>
  <Object id="COO.1.1.1.9285" name="Fabasoft Folio/Folio" reference="ComponentFolio@1.1"/>
  ...
</Objects>
In earlier versions of the XSL transformation script shipped with the app.telemetry software component, additional elements (an <Attributes> sublist and a <Methods> sublist below the <Object> tags) were generated. They are not used for address resolution and name mapping and can be skipped. The only entries required in the mapping file for name and reference resolution are the <Object …/> tags. To remove the unneeded old sublist elements, you can use grep to exclude all <Attributes> and <Methods> sublist entries:
Exclude unused attributes from mapping file (optional) |
---|
grep -v "<Attribute" fscdata.xml | grep -v "</Attribute" | grep -v "<Method" | grep -v "</Method" > fscdata-small.xml |
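For illustration, the same filter applied to a small sample file (the sample content below is made up; a real fscdata.xml is much larger):

```shell
# Create a small sample mapping file containing the old <Attributes> and
# <Methods> sublists (content is illustrative).
cat > fscdata.xml <<'EOF'
<Objects>
<Object id="COO.1.1.1.2500" name="Fabasoft Folio/Base" reference="ComponentBase@1.1">
<Attributes>
<Attribute id="..."/>
</Attributes>
<Methods>
<Method id="..."/>
</Methods>
</Object>
</Objects>
EOF

# Strip the sublists exactly as shown above; only the <Object> entries remain.
grep -v "<Attribute" fscdata.xml | grep -v "</Attribute" \
  | grep -v "<Method" | grep -v "</Method" > fscdata-small.xml
```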
Set up Fabasoft Folio address resolution for Fabasoft app.telemetry:
The Fabasoft app.telemetry web browser client receives the formatted values from the Fabasoft app.telemetry web service, which is therefore responsible for the formatting of the addresses. This implies that the mapping file has to be stored in the configuration folder on the web service under the following path:
Restart the Fabasoft app.telemetry Worker service to read the new content of fscdata.xml.
Since Version 19.1 you can upload a new fscdata.xml file in the Fabasoft app.telemetry web browser client using the “Upload Mapping” action in the Application view. The existing fscdata.xml will be replaced and immediately applied without a restart of the Fabasoft app.telemetry Worker service.
Since Version 19.1 the upload of the fscdata.xml is also supported using an HTTP POST request.
Upload fscdata.xml using curl |
---|
curl -u username:password --header "content-type: application/xml" --url http://localhost/apptelemetry/server/UploadMapping --data-binary @fscdata.xml |
Whether the COO-address is mapped to the object's name or its reference is defined in the log definition of the corresponding log pool by the column “format” with the values:
In special situations one Fabasoft app.telemetry server may be used to monitor multiple Fabasoft Folio domains (e.g. test domain and production domain). Currently the app.telemetry server only supports one global address resolution mapping file (as described in the main chapter).
The solution to get Fabasoft Folio object addresses of different domains resolved together is to merge the separate mapping files (generated for each Fabasoft Folio domain) into one single mapping file.
Or just copy all plain <Object> entries, without any surrounding container tags, into one single file with the syntax shown in the following example:
Syntax of fscdata.xml mapping files |
---|
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<Objects>
  <Object id="COO.1.1.1.2500" name="Fabasoft Folio/Base" reference="ComponentBase@1.1"/>
  <Object id="COO.1.1.1.9285" name="Fabasoft Folio/Folio" reference="ComponentFolio@1.1"/>
  ...
</Objects>
Note: Be careful with the file encoding – ensure to edit and save the file with valid encoding (UTF-8).
Duplicate <Object> mapping definitions may occur in the merged file, but that does not matter: the first <Object> definition for an id (COO-address) is used to resolve the entry.
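The merge described above can be sketched as a small shell pipeline (the per-domain file names and their contents are made up for illustration):

```shell
# Two per-domain mapping files (contents illustrative).
cat > domain1.xml <<'EOF'
<Objects>
<Object id="COO.1.1.1.2500" name="Fabasoft Folio/Base" reference="ComponentBase@1.1"/>
</Objects>
EOF
cat > domain2.xml <<'EOF'
<Objects>
<Object id="COO.1.1.1.9285" name="Fabasoft Folio/Folio" reference="ComponentFolio@1.1"/>
</Objects>
EOF

# Copy all plain <Object> entries into one surrounding <Objects> container.
{
  echo '<?xml version="1.0" encoding="UTF-8" standalone="no"?>'
  echo '<Objects>'
  grep -h "<Object " domain1.xml domain2.xml
  echo '</Objects>'
} > fscdata.xml
```

The resulting fscdata.xml follows the syntax shown above and is saved as UTF-8.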
Since Version 2022 UR2 the command line tool supports uploading or merging the fscdata.xml file with the following commands:
Upload Mapping file using Command Line |
---|
apptelemetry mapping upload fscdata.xml
or
apptelemetry mapping merge fscdata.xml
When merging mappings, the entries will replace existing entries and the merged fscdata.xml file will be updated on the server.
With the 2022 November Release, Folio provides a web service URL to download the fscdata.xml from a Folio web service:
Download fscdata.xml from Folio Web Service |
---|
curl -u username:password -o fscdata.xml \
  https://folio.mycompany.com/folio/apmgeneratemapping/fscdata
An incremental update can be acquired by passing a valid timestamp as an optional changedat parameter, as in:
Download incremental fscdata.xml from Folio Web Service |
---|
curl -u username:password -o fscdata.xml \
  https://folio.mycompany.com/folio/apmgeneratemapping/fscdata?changedat=2022-10-22
With the native Software-Telemetry module for Internet Information Services (IIS), Fabasoft app.telemetry can log each HTTP request with some important parameters to a separate Software-Telemetry log pool, or the data is shown as an extension module in the requests of an involved log pool of another web application. The module shows the start and end time of each request, the time needed for authorization and execution, and additional request parameters.
Configuration:
To install the module on Microsoft IIS7 web server follow these steps:
Enable the module for your web application:
Note: You can enable the module for just your web application or for the complete web site, but be careful not to enable the module on two configuration layers, because this will lead to a duplicate error.
Enable context transitions for Browser-Telemetry:
The IIS-Software-Telemetry module supports End-2-End Software-Telemetry by providing a context in a session-cookie. Use the following steps to enable this feature:
Since Fabasoft app.telemetry 2012 Spring Release log pools and log definition columns have been extended to be more powerful and flexible than before.
Log definition columns can be defined as generic/dynamic columns based on other columns obtaining their value by means of evaluating a calculation formula.
Possible calculation types are:
Categorize Value
To categorize an existing log definition column value, decide which parent/base column you want to split up into reasonable value categories. This column is defined in the new column definition as parent-attribute containing the name of the chosen existing column. You can also choose internal columns like the duration column.
The next step is to define the split points that separate the value into the categories, using the calculation-attribute. If you define 3 split points, you will get 4 categories: below the 1st split point, between the 1st and 2nd, between the 2nd and 3rd, and above the 3rd split point.
Then you can define textual labels for the categories using the format-attribute containing the keyword "enum:" followed by the number of the category, a colon (:) and the label text separated with a semi-colon (;) from the next category.
Last but not least, set the flags of the newly defined column, including the flag for CALC_VALUE_CATEGORY = 0x10000 (decimal 65536). If you want the column to be a dimension column, you also have to include that flag (0x0100 / decimal 256).
Syntax for Categorize Value |
---|
calculation="x1;x2;x3"
format="enum:1:0-x1;2:x1-x2;3:x2-x3;4:>x3"
parent="name of parent column"
name="Category Column Label"
flags="65792"
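A hypothetical concrete instance of such a category column (split points, labels and the element layout are invented for illustration; the unit of the duration column depends on your log pool definition, and flags 65792 = CALC_VALUE_CATEGORY 65536 + dimension 256):

```xml
<APMLogAnalyzerEntry name="Response Time Category" parent="duration"
    calculation="1000;3000;10000"
    format="enum:1:fast;2:acceptable;3:slow;4:very slow"
    flags="65792"/>
```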
Split Value (Regular Expression)
To split an existing log definition column value into sub parts, decide which parent/base column you want to split up. This column is defined in the new column definition as parent-attribute containing the name of the chosen existing column.
The next step is to define the regular expression splitting up the existing string into a new value using the calculation-attribute:
Last but not least, set the flags of the newly defined column, including the flag for CALC_REGEXP = 0x20000 (decimal 131072). If you want the column to be a dimension column, you also have to include that flag (0x0100 / decimal 256).
Example: Web Timing Dynamic Columns |
---|
<APMLogAnalyzerEntry name="Response Time Category" parent="duration"
<APMLogAnalyzerEntry name="Protocol" parent="Page URL (referer)"
<APMLogAnalyzerEntry name="URL Path" parent="Page URL (referer)"
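A sketch of what one such regular-expression column might look like in full (the exact calculation syntax — for instance whether the first capture group provides the resulting value — is an assumption to verify; flags 131328 = CALC_REGEXP 131072 + dimension 256):

```xml
<APMLogAnalyzerEntry name="Protocol" parent="Page URL (referer)"
    calculation="^(https?)://.*"
    flags="131328"/>
```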
Since Fabasoft app.telemetry 2012 Spring Release a new database-based statistics feature is available, which allows you to calculate defined statistics on the available data at a defined time.
For example you can calculate and summarize all the requests from the last day every night and generate significant statistic charts.
Feature Details:
The new log statistics are based on the following components:
The Log Statistic object defines the following key facts:
The Top-X Logpool Statistic Chart defines the following key facts:
Configuration Details
The basic configuration can be done via the app.telemetry client (GUI) interface on the edit view.
Since Fabasoft app.telemetry 2012 Summer Release a new transport channel for app.telemetry agent/library communication is available.
Before this feature was introduced, telemetry data could only be sent to the app.telemetry agent via a native library using shared-memory communication, which limited application instrumentation to the supported platforms of the app.telemetry agent.
To extend the support for other platforms (for application instrumentation), we have introduced the TCP transport channel, which can be used to transport the telemetry data from any Java platform (including other hardware architectures).
Feature Details:
The TCP transport channel is available for the following app.telemetry libraries:
Configuration Details app.telemetry Agent
First of all, you have to enable the TCP transport channel on the app.telemetry agents in your infrastructure (by default the agent does not listen for any TCP data).
Define the network port the agent should listen on for telemetry data in the app.telemetry agent configuration:
Restart the app.telemetry agent daemon/service.
Configuration Details for C/C++ Library
In order to tell the native C/C++ Software-Telemetry library to communicate via TCP transport (instead of shared memory) with an app.telemetry agent, start the instrumented application with the following environment variable:
Configuration Details for Java Library
In order to tell the Java Software-Telemetry library to communicate via TCP transport (instead of communicating with the native library on the local system) with an app.telemetry agent, start the instrumented application with the following configuration parameters:
Optionally you can define a log file for debugging purposes as a Java system property:
With the 2011 Winter Release a Java instrumentation framework was introduced, which allows analysis of Java applications based on instrumentation points dynamically added to the compiled Java code at runtime. Using a definition stored at the Fabasoft app.telemetry Server, the application can be analyzed on a coarse or detailed level, depending on the telemetry points added to the definition and the level of detail you select to record.
Note: As of Fabasoft app.telemetry 2018, this feature is no longer supported and removed as of Fabasoft app.telemetry 2018 UR 2.
Java provides an interface named “Java Virtual Machine Tool Interface” (JVMTI), which provides certain methods and callback entry points to access data and to intercept processing of applications running on Java virtual machines. Fabasoft app.telemetry provides a native code Java agent library, which may be loaded at startup of the Java runtime.
Instrumentation
The Fabasoft app.telemetry JVMTI Library will intercept the load mechanism of Java classes to insert code fragments into those methods defined in the “Dynamic Instrumentation” section of the log pool definition. When being executed these code fragments will call the Fabasoft app.telemetry library to record telemetry information in combination with the specified parameters and return values.
This instrumentation approach incurs a moderate overhead during startup, due to the instrumentation of the selected methods, and a small overhead during execution for calling the telemetry library, which depends highly on the number of telemetry points recorded.
Sampling Mode
In addition to the instrumentation part, the Fabasoft app.telemetry JVMTI library provides a “sampling” mode, which is designed to provide a first-chance overview of what takes long during application processing. This sampling mode analyzes the call stacks of all Java threads on a regular basis (default 10 milliseconds, configurable by command line parameters) to determine method executions that take long. By matching each call stack from the root against that of the previous interval, the sampling analyzer assumes that common entries suggest a long runtime in the particular function. Although sampling is never exact, it provides a starting point for instrumentation as well as a call hierarchy of method invocations suggested for instrumentation.
Application Registration
Fabasoft app.telemetry distinguishes application instances by application registration parameters provided by the application. These registration parameters are used to identify matching log pools or service objects and to clearly display the information source for particular instrumentation points during analysis. In conjunction with dynamic Java instrumentation, the first usage of the registration parameters is to determine the log pool from which to take the dynamic instrumentation points.
Fabasoft app.telemetry defines 4 Parameters to register an application with:
The “application name” and “application ID” should identify the system providing a specific service, whereas the “application tier name” allows distinguishing between multiple types of subservices and the “application tier ID” allows identifying individual service instances in an environment, where multiple instances of similar services are configured. Log pools are selected by a string match of any combination of these 4 parameters.
Analogous to the mechanism of selecting the right log pool to provide the dynamic instrumentation definition, the requests recorded by the instrumented application will show up in the request list of that particular log pool. See the configuration section for details on how to set the application registration parameters.
Request Context
Services are commonly used in the form of requests, either from a user or from other services. These requests may be as big as business transactions or as small as an RPC call to a base service. Fabasoft app.telemetry uses the term “request” as the unit of work performed on a single service. A request in the dynamic Java instrumentation is represented by the execution of one or more specifically marked functions. The request starts at the beginning of the function and ends when the function returns. All instrumentation points executed on this particular thread will be recorded and displayed in the context of this request.
Instrumentation Point Definition
An instrumentation point definition represents all information, which the telemetryjvmti library needs to instrument a single method.
Each instrumentation point has a name, which should be recognizable by the user analyzing the recorded requests.
The “Package/Class” and “Method/Parameter” properties specify which method the library should apply the instrumentation to. Instead of the class name you may provide an interface name so the definition will be used for all classes implementing this interface.
Use the “Instrumentation Module” parameter to assign the instrumentation point to a module. Grouping instrumentation points into modules has the advantage that you may easily identify the software layer (e.g. Database, Authentication, …) which is causing problems.
At runtime you select which level of information detail you want to record. A more detailed recording level provides more detailed information at the cost of more data traffic and a greater impact on the performance of the application. With the “Log-Level” property you select the detail level at which this instrumentation point is recorded. Every defined instrumentation point is always inserted into the code, no matter what the current recording level is, because you may change the recording level at runtime or start a telemetry session with a more detailed recording level. But the first check the telemetry library performs when a telemetry point is triggered is to validate the log level against the current recording level of the request, so the performance impact is minimized.
There are a number of flags, which can be set on each recorded instrumentation point:
During the instrumentation process of a method, code fragments may be inserted at the beginning and before the return statements of the function. The measurement flags declare which fragments are used in which place. When no measurement is selected, only one instrumentation point is inserted at the beginning of the method. If only the “Return Value” parameter is selected, an instrumentation point is inserted at the end of the method, with the return value logged as a parameter. To measure the duration, two code fragments need to be inserted, one at the beginning and one at the end of the method; selecting the “Duration” measurement provides this.
As described above (chapter “Request Context”) one or more methods need to be flagged as a “Request Context” to tell the Software-Telemetry about the scope of a request. Additional code fragments are inserted to trigger a “CreateContext” at the beginning of the method and a “ReleaseContext” at the end when the “Start & Stop a request here” flag is selected.
You may select any input parameter of the function as “Method Parameters” so that its value is added as a parameter to the instrumentation point at the beginning of the method.
Load Fabasoft app.telemetry Java Agent Library
In order to activate Fabasoft app.telemetry for a Java application, the Java runtime has to be started with an additional command line parameter:
Syntax for Java Agent Startup |
---|
-agentlib:telemetryjvmti=<mode> |
Where the <mode> parameter may have one of the following values:
For many Java applications this can be set via the environment variable JAVA_OPTS, which is honored by many startup scripts (sometimes it is better to extend that environment variable instead of overwriting it):
Syntax for Java Agent Setup via JAVA_OPTS |
---|
<set/export> JAVA_OPTS='-agentlib:telemetryjvmti=<mode>' |
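For example, on Linux, extending any existing JAVA_OPTS value rather than overwriting it (the mode value dynamicInstrument is one of those listed below):

```shell
# Append the agent option to JAVA_OPTS so that existing options are preserved.
export JAVA_OPTS="${JAVA_OPTS} -agentlib:telemetryjvmti=dynamicInstrument"
echo "$JAVA_OPTS"
```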
Additionally you can turn on logging for the app.telemetry JVMTI Java library by setting the Java startup parameter:
-Dcom.apptelemetry.apm.logfile=<your-logfile-pathname>
Using “dynamicInstrument” will instruct the library to apply the dynamic instrumentation definition to the application so the application will be instrumented.
With “sampling” the library will – in addition to dynamic instrumentation – provide information gathered by analysis of thread call stacks on a timed basis. Additional parameters may be provided in a comma-separated way:
The first 4 options provide the application registration parameters and the interval option modifies the sampling interval if the library operates in sampling mode.
Usage Example for Java Agent Startup |
---|
java -agentlib:telemetryjvmti=<mode> -jar myApp.jar |
Application Registration
Each application registers itself using the 4 registration parameters
You may provide these parameters either as command line parameters in the -agentlib parameter or as environment variables.
The environment parameters are considered only if none of the registration parameters is specified on the command line. The application name is the only parameter that must be defined; it has a default value of “Fabasoft app.telemetry JVM TI Integration”. All other parameters are optional and can be defined according to your service structure. For every registered application, a service object is created automatically in the Fabasoft app.telemetry infrastructure, uniquely identified by the 4 registration parameters and the infrastructure id of the Fabasoft app.telemetry agent. As soon as an application has successfully registered, the registration parameters are selectable as filter values in the log pool definition dialog.
Sampling Mode
When sampling mode is active by specifying the “sampling” mode parameter, the Fabasoft app.telemetry telemetryjvmti library will analyze the state of the application every sampling interval (default 10ms, configurable by the “interval” parameter in the -agentlib command line parameter).
Once every minute the cumulative statistic will be transferred to the Fabasoft app.telemetry server. This statistic is available through the “Edit Log Definition” dialog on the “Dynamic Instrumentation” tab.
Pressing the “Sampling…” button will show a list of all methods that have been determined to take time. Select any meaningful entry and press “Add” to add an instrumentation point for that method.
There is another way to access sampling mode statistics. Selecting an existing instrumentation point from “Instrumentation Points for Dynamic Instrumentation” and pressing “Browse” shows a dialog that presents the context of the instrumentation point. The list “Current Method Invoked by the Following Methods (Callers)” contains all methods that have been seen as callers of the selected function during the sampling time. Underneath the “Selected Method” there is the “Current Method Calls Following Methods (Callees)” list, which contains the methods called by the selected method. Both lists may be incomplete, since sampling is a statistical, not an exact, measuring method. Navigate through the call stacks by double-clicking methods in either the callers or callees list and add instrumentation points using the “Add” button.
Edit the properties of the new instrumentation points by selecting the respective entry and pressing the “Edit” button.
Tomcat
The Tomcat instrumentation is based on Tomcat version 6 and covers the following interfaces:
The method Adapter.service is the main request context. During request processing the request is handled by a number of configured Valves, each calling the “invoke” method of the next Valve before leaving.
Liferay
The Liferay instrumentation definition has been developed for Liferay version 5.2 on top of Tomcat and defines additional instrumentation points covering the processing of the Liferay servlets.
The following classes will be instrumented:
JDBC
The JDBC instrumentation definition includes methods of the interfaces, which any JDBC driver implements. The instrumentation points cover the following interfaces:
The methods used to establish a connection to the driver and to execute SQL statements are instrumented at “Normal” level whereas the access to the ResultSet is instrumented at “Detail” level. Access to the data fields is not instrumented at all because of performance and security reasons.
Fabasoft app.telemetry counter checks are a powerful way to monitor arbitrary values of different (foreign) systems. One such (foreign) system (from the app.telemetry point of view) is Fabasoft Folio and its internal data structures.
With shell scripts you have still the possibility to obtain some internal data from Fabasoft Folio and with app.telemetry counter checks you can monitor those values obtained by some scripts writing the results into text files.
Some of the internal data of Fabasoft Folio can be obtained by executing the utility program fsceval on a Fabasoft Folio backend server (running as Fabasoft Folio service user).
Note: Running these scripts as cron job may require some special environment handling:
This may sound a little complex, but the following two examples will help you understand and use this powerful feature:
To monitor the count of free object addresses in a Fabasoft Folio COO-Store you may use the following expression and scripts:
1. Write the expression and save it to a file (objinfo.exp).
objinfo.exp: Expression for free addresses in COO-Store

    // Check all stores and write free addresses to a file per store
    // specify target path here
    @writenumbertopath = "/var/opt/app.telemetry/status/";
    @svcs = coort.SearchLocalObjects3(cootx, "COOService");
    @objremaining = 0;
    for (@i = 0; @i < count(@svcs); @i++) {
      @Storelist = @svcs[@i].coosrvinfo.cooinfmaxobjids;
      for (@j = 0; @j < count(@Storelist); @j++) {
        @Storeagg = @Storelist[@j];
        if (@Storeagg.cooinfmaxcoost.objclass == COO.1.1.1.440) {
          @objremaining = @Storeagg.cooinfavailobjids;
          @cont = coort.CreateContent();
          @cont.SetContent(cootx, 1, 65001, @objremaining);
          @cont.GetFile(@writenumbertopath + "freeids_" + @Storeagg.cooinfmaxcoost.objname + ".txt");
        }
      }
    }
2. Write a script to get and update the value, save it as shell script and test it (running as Folio service user on a backend server).
Test expression using fsceval

    su - fscsrv
    fsceval -nologo -file objinfo.exp
3. Create a new app.telemetry counter check for each COO-Store to monitor the value from the status files, and define the update interval and the warning/critical limits (for example: a warning limit below 1000000 and a critical limit below 100000) to be notified when the COO-Store is running low on free object addresses.
In order to get notified before your Fabasoft Folio license expires just follow this example.
1. Write an expression like the following to get the number of days until your license expires and save it to a file (fsclicense.exp).
fsclicense.exp: Expression to check days until license expires

    @expiryday = 999999;
    @lics = coort.GetCurrentDomain().COOSWCLM@1.1:domainlicenses;
    for (@i = 0; @i < count(@lics); @i++) {
      @lic = @lics[@i];
      @exp = (@lic.COOSWCLM@1.1:keyexpirydate - coort.GetCurrentDateTime(coouser)) / 3600 / 24;
      if (@exp < @expiryday) {
        @expiryday = @exp;
      }
    }
    @expiryday;
Warning: In some situations you may not rely on the accuracy of the COOSWCLM@1.1:domainlicenses property of your current domain.
2. Write a script to get and update the value, save it as shell script and test it (running as Folio service user on a backend server).
Test expression using fsceval

    su - fscsrv
    fsceval -nologo -file fsclicense.exp
3. Extend your script by “grepping” for the desired value in the output and storing the result in an app.telemetry status file, and configure a cron job to call this update script periodically.
Update shell script (update-license-expiration.sh)

    #!/bin/bash
    HOST=localhost
    LD_LIBRARY_PATH=/opt/app.telemetry/lib64:/opt/fabasoft/share/eval:/opt/fabasoft/share/eval/INSTALLDIR/Domain_1_1:/opt/fabasoft/share/eval/INSTALLDIR/Domain_1_1001
    export HOST PORT LD_LIBRARY_PATH
    /opt/fabasoft/bin/fsceval -nologo -file /home/fscsrv/fsclicense.coo
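The grep-and-store step described in step 3 can be sketched as a minimal, self-contained shell snippet. The fsceval output line, the extraction pattern, the status file path, and the cron entry below are assumptions for illustration; adjust them to what your expression actually prints and to your configured status directory:

```shell
#!/bin/bash
# Hypothetical fsceval output captured into a variable; in the real script
# this would be: OUTPUT=$(/opt/fabasoft/bin/fsceval -nologo -file fsclicense.exp)
OUTPUT="expiry days: 42"

# Extract the first number from the output ...
DAYS=$(echo "$OUTPUT" | grep -o '[0-9]\+' | head -n 1)

# ... and write it into an app.telemetry status file (path is an assumption)
STATUSFILE=/tmp/license-expiration.txt
echo "$DAYS" > "$STATUSFILE"

# A cron entry calling such a script once per day could look like:
# 0 6 * * * /home/fscsrv/update-license-expiration.sh
```

The counter check then simply reads the single number from the status file at each update interval.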
4. Create a new app.telemetry counter check to monitor the value from the status file and define the update interval and the warning/critical limits (e.g. warning below 30 days and critical below 5 days).
Fabasoft app.telemetry provides different strategies for managing data retention.
Large amounts of data are stored continuously by the app.telemetry Server through the following services:
In order to handle the increasing amount of data and prevent the disk from running out of free space you can configure automatic retention time periods within the app.telemetry client. For more details read the sub chapter “Cleanup Strategies”.
Note: Before starting to delete any data, you should export Software-Telemetry server sessions and feedbacks using a special automatic app.telemetry server task (configuration setting) in order to be able to access the request details of such sessions later on.
In most situations, feedbacks should be available for a much longer time period than standard request detail data, which is kept for post-problem analysis only for some time. Therefore you can split the detail data required for the feedbacks off from the normal rawdata directories.
This feature is automatically enabled after updating to version 2014 Spring Release or later, unless it is explicitly disabled in the server configuration file. If you want to change this setting or the session export path, stop the app.telemetry server daemon, then open the configuration file of the app.telemetry server (/etc/app.telemetry/server.conf) and set and activate the configuration parameter “SoftwareTelemetrySessionPath”:
# The SoftwareTelemetrySessionPath property defines the target location
# for extracting reported telemetry sessions from the raw data files. (optional)
# The default path is: /var/opt/app.telemetry/server/sessions
# To disable automatic session extraction uncomment the line below (set to empty)
SoftwareTelemetrySessionPath /var/opt/app.telemetry/server/sessions
After this configuration is activated you can start the app.telemetry Server daemon again.
After a while, the server will start processing all existing telemetry sessions and export them to the configured directory. This may take some time depending on your infrastructure and on the number and size of reported sessions/feedbacks. You can watch the progress by the increasing size of the directory content on the file system. The export is an ongoing process that will also export new incoming feedbacks a couple of minutes after they have been fully completed.
Within the Fabasoft app.telemetry client you can define different automatic data deletion rules in order to keep essential data for a defined time period but prevent filling up the disk with old data not required any more.
Most cleanup settings can be configured within the global “Server Properties” dialog (in the edit view at the top of the infrastructure tree).
You can limit the retention of the data by time, as a number of days. If you activate this cleanup rule, any data older than the defined time range will be deleted some non-deterministic time span later (be patient after applying the changes and give the server some time to process the cleanup).
The other possibility to limit the amount of data (only available for file-system-based data) is to set a data size limit in gigabytes (GB). Be careful: this limit is only an estimate and the actual size can vary slightly.
You can define a single type of limit for each kind of data, or both limits; if your data matches either of the two criteria, the cleanup is triggered to reduce the amount of data until it fits both criteria again.
The data retention for the Software-Telemetry request detail data (rawdata on the file system) is used to reduce the large volume of data for old requests. How long you keep request detail data for detailed problem-cause analysis (request overview, request details, request statistics, request grades) is your choice and depends on the amount of available disk space.
Warning: you should have exported the reported telemetry sessions/feedbacks as described above, otherwise those session details will not be accessible!
For long-term analysis of your applications you can still use the activity statistics if you keep that data for a longer period than the request detail data.
Additionally you can configure a database retention time range for all log pools as parameter on every log pool configuration dialog:
If you activate the database data retention for a given time range (in days) the following data will be automatically deleted from database tables belonging to that log pool:
The Status change data is required to show a table containing the history of all status changes of every service/counter-check. This history data is available for the time range defined in the cleanup settings of the “Server Properties”.
Service checks with a defined SLA definition are persisted independently and without time limit in a database defined within the SLA definition.
Counter checks with a defined database for persisting the counter results are also stored on the defined database. On the “Data Cleanup Settings” page of the “Server Properties” you may specify a “Data Expiration (days)” value to cleanup all counter values that are older than the given number of days (this option is available with Fabasoft app.telemetry 2015 Rollup 2).
Analyzing performance issues in requests – especially in distributed applications like Fabasoft Folio – is a time consuming task requiring a lot of application specific knowledge. To simplify this process, a set of rules is being created to identify common issues.
There are common rules applying to any Fabasoft app.telemetry instrumented application and rules specifically written for dedicated products like Fabasoft Folio. In those rules, common problems are identified from the telemetry data included in the request. Designing those rules is an evolving process in which analysis steps originally performed manually are being formalized and automated.
In order to use these rules for analyzing telemetry requests, open a request on the telemetry view and select the new “Grades” tab in the bottom analyzer area.
This rule applies to any request generated by Fabasoft app.telemetry instrumented applications and simply checks whether all processing threads that occur in this request were correctly terminated.
There are several reasons why a request could be detected as not finished:
The Fabasoft Web service communicates with the Fabasoft Backend Services using Remote Procedure Calls (RPCs). Each call requires at least one network roundtrip and the allocation of a backend service thread. Issuing too many calls will result in a delay mainly caused by the network latency. Replacing many small RPCs by fewer larger ones will save roundtrip time and management overhead on client and server side.
The grade of the rule will reflect the potential benefit of an improvement based on the fraction of time consumed by RPC requests in proportion to the total request time.
Thus the main information provided is the count and the duration of all RPCs executed. In addition, the duration is split into communication time and execution time, based on the time difference between the requests on the Fabasoft Kernel side and the execution on the Fabasoft COO Service side.
Especially when a high communication time is indicated, the COO Service RPCs are worth further analysis. Assuming that the Web Server and Backend Server are located in a reliable and fast network infrastructure, high communication time most likely results from a high number of RPCs. Each RPC takes at least half a millisecond of overhead for the network to transfer the request, for the COO Service to schedule the request to an available worker thread, and to transfer the result back to the Web Service. For example, 2000 RPCs at 0.5 ms each already add a full second of overhead. So a high number of RPC requests leads directly to bad performance without there being a bottleneck in any single application tier.
In the details section, an RPC statistic based on RPC type is provided, indicating how the different RPC types contribute to the total RPC time and count.
The most common problem in this area is the so-called Typewriter, which can be recognized by a high “Request Count” for the “COOSTAttrSel” RPC, the RPC requesting object information from the COO Service. The typical source of this situation is a loop iterating over a list of objects without previously loading the required attributes of all these objects in a single call. Any access to an object then requires the Kernel to load the objects one by one. While this produces the correct result, it leads to many RPC requests and therefore to bad performance. Optimizing the Typewriter scenario requires a call to coort.LoadAllAttributes/LoadSpecificAttributes providing the list of all objects being iterated.
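The Typewriter pattern and its fix can be sketched in Fabasoft expression style. The variable names and the attribute used are hypothetical, and the exact LoadSpecificAttributes signature may differ between Folio versions, so treat this as an illustration of the principle rather than copy-paste code:

```
// Typewriter: each attribute access may trigger a separate COOSTAttrSel RPC
for (@i = 0; @i < count(@objects); @i++) {
  @name = @objects[@i].GetAttributeValue(cootx, COOSYSTEM@1.1:objname);
}

// Optimized: load the required attributes of all objects in a single call first,
// then iterate over the now-cached objects without further RPCs
coort.LoadSpecificAttributes(cootx, @objects, COOSYSTEM@1.1:objname);
for (@i = 0; @i < count(@objects); @i++) {
  @name = @objects[@i].GetAttributeValue(cootx, COOSYSTEM@1.1:objname);
}
```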
The list of the “Top 10 events issuing RPCs” may help you identify the method containing the loop, which can be recognized as the last event before the count drops to a low value. Clicking on the event leads you to the detail view showing the instrumentation points recorded while processing this method.
This rule analyzes how many object attributes are read and how much time is consumed doing so. The instrumentation points required for this analysis are only recorded in Debug mode. In less detailed recording levels this rule will show up as “N/A”.
As the GetAttribute variants are the usual way to access Fabasoft Folio objects, using them is not an error, but if accessing information takes a considerable fraction of the processing time, it is still a good starting point for further investigation. Use the list of “Top events accessing object attributes” to identify the method in whose context many attributes are accessed.
You may optimize data access either by caching or by calling GetAttribute once instead of iterating over an attribute using GetAttributeValue. Also, a call to HasAttributeValue with a subsequent call to GetAttributeValue can often be replaced by a single GetAttributeValue, saving at least the overhead of one access check.
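The second optimization can be sketched in Fabasoft expression style; the attribute and variable names are hypothetical and the exact call signatures depend on your Folio version:

```
// Before: two calls, each performing its own access check
if (@obj.HasAttributeValue(cootx, COOSYSTEM@1.1:objname)) {
  @name = @obj.GetAttributeValue(cootx, COOSYSTEM@1.1:objname);
}

// After: a single call; an unset attribute simply yields an empty value
@name = @obj.GetAttributeValue(cootx, COOSYSTEM@1.1:objname);
```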
When looking at the “Attribute Access Statistics” you can determine the duration of a specific type of data access. Most interesting here is the ratio between the “Duration” and the “Self Time”: a high “Self Time” indicates that the duration mainly results from the number of data accesses, whereas a low “Self Time” compared to the duration indicates cache misses, either on the object itself or on objects required for the access check. Cache misses result in RPCs fetching object data from the COO Service.
Click on the call name to go to the “Selected Statistics Events” tab, sort by duration, and try to solve the performance problem when attribute accesses lead to COO Service requests.
The Fabasoft Client communicates with the Fabasoft Web services using http requests. Each call requires at least one network roundtrip and the allocation of a web service thread. Issuing too many calls will result in a delay mainly caused by the network latency.
The “Optimize HTTP Request” rule helps you determine why a request in the Fabasoft Web Browser Client is slow.
Based on that analysis you can focus on the part of the request, which has most influence on the request time.
Fabasoft COO-Services read and write object data from/to a relational database. Reading data is required in case of queries and when objects are currently not in the COO-Service cache. Writing data occurs every time objects are being created, changed or deleted. Object lock information is also persisted on the database.
How queries can be optimized depends on the type of database statement:
Reading Objects (COOSTAttrSel):
Queries (COOSTQuery): Processing queries is executed in several phases:
The default authentication method for Fabasoft app.telemetry web browser client users is "Basic Authentication". Since 2011 Winter Release you may use https with client certificates as an alternative login method. The following guide explains how to configure “Certificate Authentication” for Fabasoft app.telemetry using an Apache webserver on a Linux system.
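As an illustration only (the certificate paths, the CA file, and the virtual-directory name /apptelemetry are assumptions, not the official configuration), an Apache setup requiring client certificates can look roughly like this:

```apache
# Minimal sketch of client-certificate authentication in Apache httpd.
# All file paths and the Location are placeholders; adjust to your system.
<VirtualHost *:443>
    SSLEngine on
    SSLCertificateFile    /etc/pki/tls/certs/server.crt
    SSLCertificateKeyFile /etc/pki/tls/private/server.key
    # CA that issued the client certificates
    SSLCACertificateFile  /etc/pki/tls/certs/client-ca.crt

    # Require a valid client certificate for the app.telemetry client URL
    <Location /apptelemetry>
        SSLVerifyClient require
        SSLVerifyDepth  2
    </Location>
</VirtualHost>
```

The SSLVerifyClient/SSLVerifyDepth directives are standard mod_ssl options; the mapping of the certificate subject to an app.telemetry user still has to be configured as described in the following sections.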
Prerequisites:
Configuration:
Details / Hints:
Implementation:
The object context required by the use case analysis has to be provided by implementing a wrapper for the GetObjectContext method. This method is called several times per request on the object a view or use case operates upon. The wrapper should set the return parameters ctxobj, ctxtype, context and category, where ctxobj is the “container” object (e.g. a teamroom or file) of the current object and ctxtype is the class or type information (e.g. object class or file plan entry). The context is a string representing hierarchical context information (e.g. the file plan hierarchy; the syntax is currently unspecified and unparsed). The category should be set to the main document category.
Prerequisites:
Configuration: