The configuration of a Fabasoft app.telemetry instance is stored in an XML file and related encryption key files. These files are normally named “infra.xml”, “encryption.key” and “encryption.pem” and are located under the following directories:
As the filename of the configuration file can be specified via command-line parameters of the services, you may have to check the respective settings to locate the active configuration file.
Since Fabasoft app.telemetry 2015 Update Rollup 1, the communication between the Fabasoft app.telemetry services is secured using certificates. As all Fabasoft app.telemetry services only respond to known peers identified by their certificates, it is essential to back up at least the Fabasoft app.telemetry server certificate, which is used to contact all agents.
The certificates and the hashes of the trusted certificates are stored in the following folder:
Up to four files – depending on the service – are stored in each folder:
Certificates are created on service startup if they do not exist, and the respective trusts are established for all services located on the Fabasoft app.telemetry server. All other agents will accept the first app.telemetry server contacting them and will add its certificate hash to their trusted certificates. Restoring an app.telemetry server therefore requires the server/cli_certificate.pem to be restored so that the agents will accept the server’s incoming connections. It is recommended to back up and restore all certificate, trust and fingerprint files to make sure that all trusts are established correctly.
Software-Telemetry data is stored on the file system and contains detailed information about the activities that occurred during request processing in the measured applications. This data is the basis for detailed request analysis.
You can back up and restore telemetry data by copying the data on a file basis.
The files are organized in daily folders named by the date on which the data was received by the Fabasoft app.telemetry server. The time base for this structure is UTC, so be careful to copy the correct folders when analyzing data of specific time ranges.
Inside the daily folders, Software-Telemetry data is structured in several subfolders that contain different aspects of the telemetry data. Most of these folders are required to correctly load request data for analysis, so there is no way to significantly reduce the amount of data by skipping any of the subfolders.
The root folder of the telemetry data is
This default location may have been changed to place Software-Telemetry data onto a different data partition by
As telemetry data is never changed after being written (with the exception of adding requests to running telemetry sessions and deleting sessions), it is safe to copy the content of past days without stopping the app.telemetry server process. When copying the current day, some files may be incomplete or locked by the app.telemetry server process.
If you move the active Fabasoft app.telemetry instance to another server, make sure to stop the Fabasoft app.telemetry server service before you copy the telemetry data of the current day. This is also essential because the telemetry data contains the latest request id in use; if you fail to restore the data correctly, you may run into duplicate request id problems.
Compression tools may help transferring data by reducing the number of files and the overall size, but do not expect high compression rates, because the largest portion of the data (contained in the “rawdata” folder) is already partly compressed internally.
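As a minimal sketch, a past day’s folder could be packed on a Linux system as follows (<telemetry-data-root> and <date-folder> are placeholders for the locations described above):

Example: Command – Pack Telemetry Data |
---|
tar czf telemetry-backup.tar.gz -C <telemetry-data-root> <date-folder> |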
As on the source system, copy the backed-up files to
In case you want to put the data into a different folder, specify the location in the following configuration:
Request data recorded by Fabasoft app.telemetry may be written to a database. This request information is also the key to accessing detailed Software-Telemetry data. So when backing up your data, make sure to also back up the request data of the specific log pool tables for the particular time frame.
There are several ways of transferring database data to another system.
The most convenient way is to simply backup the database and restore it on another database server. This process is fully supported by database tools.
If you do not want to copy all of your data, you have to extract parts of it. You may either copy the data of single tables to backup files, or create a temporary database, fill it with the data needed, and use the backup and restore procedures of the database system to transfer that database to the target system.
If you choose to selectively transfer request data, you have to copy some or all of the records of the selected log pool tables.
The name of a log pool table consists of the “Database Table Prefix” specified for the log pool (referred to as “prefix”) and one or more tables declared by the entries of the “Log Definition Columns” section of the log pool. If there are no entries in “Log Definition Columns”, a single “request” table will be generated holding the default properties of each request. Concatenate the prefix and the “Database Table” name to get the effective name of the table in the database.
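For illustration, assuming a log pool with the prefix “APMweb” and a “Log Definition Columns” entry with the “Database Table” name “Query” (names matching the examples below), the effective table name in the database is “APMwebQuery”.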
There are two kinds of data tables: the “Base Table”, which is always present, and one or more optional “Additional Tables”, which hold data that may occur multiple times in a single request (e.g. the “query” table in a Fabasoft Folio Webservice log pool). It is sufficient to copy the data of the “Base Table” only as a basis for the telemetry data analysis of a request, whereas the data of the “Additional Tables” may help to select the right requests or to identify common properties of problematic requests. “…stattime” and “…statvalue” tables represent aggregated statistics based on “Base Table” data. These tables and their content will be regenerated on demand, so they need not be copied to the target system.
You may back up all or only selected records of your base table. Common selection criteria are based on the id or the “starttime” column of the table. The id is the unique key of the row, assigned by the Fabasoft app.telemetry server when it starts processing the request, whereas the timestamp is the GMT time of the device recording the telemetry data. As time synchronization between the devices in the infrastructure may not be accurate, this timestamp may not represent the “natural” order of the requests, whereas the id represents the order in which the data was received and processed by the app.telemetry server.
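As a sketch, both selection criteria could look like this for a base table named “apmweb” (the id values are hypothetical; the time literals follow the format used in the examples below):

Example: SQL Commands – Select Records |
---|
select * from apmweb where id between 100000 and 200000 |
select * from apmweb where starttime >= '20130218' and starttime < '20130219' |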
Only the “Base Table” includes the “starttime” column; the “Additional Tables” connect to the “Base Table” data by the “id” column. So, if you transfer data from “Additional Tables”, make sure to transfer all needed records by selecting all ids referred to by the “Base Table” entries.
If you export data selected by time range and you have “Additional Tables”, the most convenient way of selecting the data is to find the minimum and maximum request id of that time range in the “Base Table” and select all records of the “Base Table” and all “Additional Tables” based on that id range, instead of selecting the “Base Table” data by “starttime” and joining to the “Additional Tables” by id. This procedure may not be 100% accurate, but it may be much faster.
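A sketch of this approach, using the “apmweb” base table and the “apmwebquery” additional table from the examples in this document (<minid> and <maxid> stand for the values returned by the first statement):

Example: SQL Commands – Select by Id Range |
---|
select min(id), max(id) from apmweb where starttime between '20130218' and '20130219' |
select * from apmweb where id between <minid> and <maxid> |
select * from apmwebquery where id between <minid> and <maxid> |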
The easiest way of transferring request data is to back up and restore the complete database. Use the tools of your database system to perform this task.
To back up a database use “Microsoft SQL Server Management Studio” or execute an SQL statement like:
Example: SQL Command |
---|
BACKUP DATABASE [apmdb] TO DISK = N'c:\temp\apmdb.bak' |
This will generate a single file containing all information stored in the database.
On the target system, create and restore the database using the Management Studio.
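Alternatively, execute an SQL statement like the following (adapt the database name and the path of the backup file):

Example: SQL Command – Restore |
---|
RESTORE DATABASE [apmdb] FROM DISK = N'c:\temp\apmdb.bak' |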
Use pg_dump and pg_restore to transfer databases.
E.g. to dump the whole database “apmdb”, use:
Example: PostgreSQL Command – Dump |
---|
pg_dump apmdb > apmdb.sql |
To restore that backup, execute:
Example: PostgreSQL Command – Restore |
---|
psql -d apmdb -f apmdb.sql |
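As an alternative to the plain-text dump restored with psql above, pg_dump’s custom format can be restored with the pg_restore tool mentioned above (the target database has to exist):

Example: PostgreSQL Commands – Custom Format |
---|
pg_dump -Fc apmdb > apmdb.dump |
pg_restore -d apmdb apmdb.dump |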
You may also export only single tables using:
Example: PostgreSQL Command – Export |
---|
pg_dump -t '"APMwebQuery"' apmdb > apmwebquery.sql |
As app.telemetry may use mixed-case table names, it is necessary to escape table names correctly, as in the statement above.
A convenient way of transferring selected database data is to create an additional temporary database on the database system, copy the selected data to the temporary database, and transfer this database to the target system using database system tools.
With pg_dump you can back up selected database tables. To back up selected rows of a table, you have to copy the data into a new table using “SELECT INTO” and back up this table.
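A minimal sketch, assuming the base table “apmweb” and a hypothetical target table “apmweb_export”:

Example: SQL Command – Select Into |
---|
select * into apmweb_export from apmweb where starttime >= '20130218' and starttime < '20130219' |

The resulting table can then be exported with pg_dump -t as shown above.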
Make sure to recreate the indexes of the “Base Tables” on the target system using the following commands:
Example: SQL Commands – Create Index |
---|
CREATE UNIQUE INDEX "apmweb_pkey" ON "apmweb" (id); CREATE UNIQUE INDEX "apmweb_idx" ON "apmweb" (starttime, id); |
… and for “Additional Tables” using:
Example: SQL Commands – Create Index |
---|
CREATE INDEX "apmwebquery_idx" ON "apmwebquery" (id); |
One way to transfer tables is to export the table definition and the table data to files, transfer these files to the target system and import them there.
When transferring data from one database to another, you have to create the database table on the target system. To extract the table structure, use “Microsoft SQL Server Management Studio”, select “Script Table as …” > “CREATE To …” > “File” from the context menu of the particular table, and save the table definition to a file.
By default, an app.telemetry log pool database table has two indexes; one of them is the primary key index on the id column. This index is already included in the table definition. The second is the index on starttime and id, which you have to create manually on the target system. You do not need to back up the structure of this index; it is more efficient to create the index after the data has been restored on the target system. You may have created additional indexes to support special query restrictions, so take care of those indexes yourself when migrating data to another server.
Microsoft SQL Server provides a tool called “bcp” for “Bulk Copy” operations. With this tool you can easily copy all or parts of the records of a table to a file.
Example: SQL Command – BCP |
---|
bcp "select * from [apmdb].dbo.[apmweb]" queryout "apmweb.bcp" -N -S localhost -T -E |
Replace apmdb with your database name and apmweb with the name of your log pool table. The option “-N” directs bcp to output data in native format, “-S” is followed by the database server name, “-T” means “trusted connection” and “-E” keeps identity column values.
As you specify a query on the command line, you may also add restrictions on the records being exported, e.g. by specifying a time range.
The following query will export the entries with a starttime between Feb 18th, 2013 00:00 (UTC) and Feb 19th, 2013 00:00 (UTC).
Example: SQL Command |
---|
select * from [apmdb].dbo.[apmweb] where starttime between '20130218' and '20130219' |
Create a database using “Microsoft SQL Server Management Studio”.
Recreate the tables in the database by executing the scripts with the table definitions generated on the source system. Make sure to execute the statements in the proper database.
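For example, a saved table definition (here the hypothetical script file apmweb_create.sql) can be executed in the target database with sqlcmd:

Example: Command – Execute Table Definition |
---|
sqlcmd -S localhost -d newapmdb -E -i apmweb_create.sql |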
Use bcp to import the data file into the table:
Example: SQL Commands – BCP Import |
---|
bcp [newapmdb].dbo.[apmweb] in "apmweb.bcp" -N -S localhost -T -E -b 10000 |
Recreate the secondary index on the table:
Example: SQL Commands – Recreate Index |
---|
CREATE UNIQUE NONCLUSTERED INDEX [apmweb_idx] ON [dbo].[apmweb] ([starttime], [id]) |