This chapter describes how Analytic Services handles transactions and locking, as well as other ways that Analytic Services protects data.
The information in this chapter is not relevant to aggregate storage databases.
When a database is in read/write mode, Analytic Services considers every update request to the server (such as a data load, a calculation, or a statement in a calculation script) as a transaction. Analytic Services tracks information about transactions in a transaction control file (dbname.tct).
The transaction control file contains an entry for each transaction and tracks the current state of each transaction (Active, Committed, or Aborted).
For a detailed description of the transaction process, see Understanding How Analytic Services Handles Transactions.
Isolation levels determine how Analytic Services commits data to disk. When data is committed, it is taken from server memory and written to the database on disk. Analytic Services automatically commits data to disk. There are no explicit commands that users perform to commit data blocks. However, setting the isolation level for a database defines how Analytic Services automatically commits data blocks.
Analytic Services offers two isolation levels for transactions: committed access and uncommitted access (the default). You can optimize data integrity by using committed access.
For an explanation of each type of access, see Committed Access and Uncommitted Access.
Note: Spreadsheet Add-in Lock and Send operations and Grid API operations always use committed access mode.
Analytic Services issues write (exclusive) locks for blocks that are created, updated, or deleted, and issues read (shared) locks for blocks that need to be accessed but not modified. By issuing the appropriate locks, Analytic Services ensures that data changed by one operation cannot be corrupted by a concurrent update.
This section discusses locks on data blocks, not locks on database objects. For information about locking and unlocking outlines and other objects, see Locking and Unlocking Objects.
The following table explains the two lock types:

| Lock | Description |
|---|---|
| Write (exclusive) lock | Issued for a block that a transaction creates, updates, or deletes; only one transaction at a time can hold a write lock on a block. |
| Read (shared) lock | Issued for a block that a transaction accesses but does not modify; multiple transactions can hold read locks on the same block at the same time. |
This table shows the locks that Analytic Services issues for various types of operations.
How Analytic Services handles locking depends on whether committed or uncommitted access is enabled.
Committed access provides a high level of data consistency because only one transaction at a time can update data blocks. Under committed access, Analytic Services allows transactions to hold read/write locks on all data blocks involved with the transaction until the transaction completes and commits. However, you can still allow read-only access to the last committed data values.
Analytic Services provides two options that determine when locks are issued on data blocks: pre-image access and a wait (time-out) setting.
When you have pre-image access enabled, you are not limited to read-only access to data blocks; if you need write access to locked blocks, the transaction waits for write access or times out, depending on the wait or time-out setting. The transaction gets immediate write access to data blocks that are not locked by another transaction.
If you do not have pre-image access enabled and if you need read or write access to locked blocks, the transaction waits for write access or times out, depending on the wait or time-out setting.
Under committed access, note the following memory considerations:
Under committed access, Analytic Services locks blocks for read and write access and holds those locks until the transaction completes and commits.
Table 60 illustrates locking behavior under committed access when more than one transaction is contending for a lock on the same data. In the example in Table 60, transaction Tx1 is running, and transaction Tx2 is requesting access to the same data.
Note that access to locked blocks depends on what options are enabled. For a discussion of options, see Committed Access.
For information about how to set concurrency parameters, see Specifying Data Integrity Settings.
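As a sketch of how these committed-access options might be enabled with MaxL, the following statements assume an application named Sample and a database named Basic; verify the exact clause names and the wait (time-out) syntax against the alter database grammar in the Technical Reference.

/* Sketch only: assumes the Sample application and Basic database. */
alter database Sample.Basic enable committed_mode;
/* Allow read-only access to the last committed data values on locked blocks. */
alter database Sample.Basic enable pre_image_access;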
Occasionally under committed access, two transactions lock or wait for access to the same blocks, and neither transaction can complete under these conditions. This situation is called a deadlock.
For example, if transaction Tx1 needs to update first data block B1 and then data block B2, it first locks B1 and then attempts to lock B2. Meanwhile, if transaction Tx2 needs to update first data block B2 and then block B1, Tx2 first locks B2 and then attempts to lock B1. Tx1 locked B1 and is waiting for B2, and Tx2 locked B2 and is waiting for B1.
Analytic Services transactions periodically perform deadlock detection while waiting to acquire a lock. If a deadlock is detected, Analytic Services issues an error message and aborts the transaction.
If you try to update a block that is locked by another user, Analytic Services behaves as follows:
For information about how to set concurrency options, see Specifying Data Integrity Settings.
Under committed access, if the server crashes, Analytic Services rolls back all database updates by transactions that were in progress when the server stopped. Thus, Analytic Services ensures that changes made by the aborted transactions are undone.
If a transaction is aborted due to a non-fatal error, all changes made by the transaction are rolled back.
For a description of the recovery process, see Recovering from a Crashed Database.
With uncommitted access (enabled by default), the Analytic Services kernel allows transactions to hold read/write locks on a block-by-block basis; Analytic Services releases a block after it is updated but does not commit blocks until the transaction completes or until a specified limit (a "synchronization point") has been reached. You can set this limit, as described below.
Concurrent users accessing the same data blocks might experience unexpected results under uncommitted access, because Analytic Services allows read-only access to data at its last commit point.
With uncommitted access, you can control when Analytic Services performs an explicit commit operation by specifying two synchronization point parameters: Commit Blocks (the number of blocks updated before a synchronization point occurs) and Commit Rows (the number of rows loaded before a synchronization point occurs).
If you set Commit Blocks to 0, the synchronization point occurs at the end of the transaction.
If a user-defined threshold is exceeded during an operation, Analytic Services issues a synchronization point to commit the data processed to that point. Analytic Services performs as many synchronization points as are necessary to complete the operation.
Note: Analytic Services checks the values of Commit Blocks and Commit Rows when it determines whether parallel calculation is feasible. If the values are set too low, Analytic Services automatically increases them.
For information about how to specify synchronization point parameters, see Specifying Data Integrity Settings.
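As a sketch, assuming a Sample.Basic database and the set implicit_commit grammar described for alter database in the Technical Reference, the synchronization point parameters might be adjusted as follows:

/* Commit after approximately 3000 updated blocks. */
alter database Sample.Basic set implicit_commit after 3000 blocks;
/* A value of 0 defers the row-based synchronization point to the end of the transaction. */
alter database Sample.Basic set implicit_commit after 0 rows;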
Caution: Analytic Services retains redundant data to enforce transactional semantics. Allow disk space for double the size of the database to accommodate redundant data, particularly if both Commit Blocks and Commit Rows are set to 0.
If the data cache is too small to hold the number of blocks specified by the Commit Blocks and Commit Rows settings, blocks are written to disk as soon as the cache becomes full, which is before the transaction is committed.
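If the data cache needs to be enlarged so that it can hold the blocks processed between synchronization points, it can be resized with MaxL. The following line is only an illustration, assuming a Sample.Basic database; choose a size appropriate to your block size and commit settings.

/* Illustrative value only. */
alter database Sample.Basic set data_cache_size 100m;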
Under uncommitted access, Analytic Services locks blocks for write access only until it finishes updating them, in contrast to committed access, under which locks are held until the transaction completes.
Table 61 illustrates locking behavior under uncommitted access when more than one transaction contends for a lock on the same data. In the example in Table 61, transaction Tx1 is running, and transaction Tx2 is requesting access to the same data.
| Status When Tx2 Makes a Request | If Tx1 holds read lock | If Tx1 holds write lock |
|---|---|---|
With uncommitted access, blocks are released more frequently than with committed access, under which all blocks are locked until the end of the transaction.
Under uncommitted access, if the server crashes, Analytic Services rolls back all database updates from the point of the last successful commit. Some of the updates from an aborted transaction may have committed. Whether transactions committed their updates the way users expected depends on the order in which overlapping transactions updated and committed data.
If a transaction is aborted due to a non-fatal error, Analytic Services commits only the data that the transaction finished processing prior to the abort of the transaction.
For a description of the recovery process, see Recovering from a Crashed Database.
If Analytic Services is using parallel calculation, it checks the commit threshold.
Consider these issues when choosing an isolation level:
| Issue | Explanation |
|---|---|
| Performance | Uncommitted access always yields better database performance than committed access. When using uncommitted access, Analytic Services does not create locks that are held for the duration of a transaction but commits data based upon short-term write locks. |
| Data consistency | Committed access provides a higher level of data consistency than uncommitted access. Retrievals from a database are more consistent. Also, only one transaction at a time can update data blocks when the isolation level is set to committed access. This factor is important in databases where multiple transactions attempt to update the database simultaneously. |
| Data concurrency | Uncommitted access provides better data concurrency than does committed access. Blocks are released more frequently than they are during committed access. With committed access, deadlocks can occur. |
| Database rollbacks | If a server crash or other server interruption occurs while there are active transactions running, the Analytic Services kernel rolls back the transactions when the server is restarted. With committed access, rollbacks return the database to its state before any transactions began. With uncommitted access, rollbacks may result in some data being committed and some data not being committed. For information about actions to take when a transaction does not complete, see What to Expect If a Server Interruption Occurs. |
Analytic Services tracks transactions from start to finish, swapping data blocks in and out of memory as needed and committing data blocks when a transaction completes. The following list describes how Analytic Services handles a transaction; all list items apply to both committed and uncommitted access (see Understanding Isolation Levels).
Under uncommitted access, it is possible to access uncommitted data when multiple transactions are active and are accessing the same data. Transaction results are unpredictable under uncommitted access.
Under uncommitted access, if you have defined a commit threshold, Analytic Services may need to break down a single database operation into multiple synchronization points. See Uncommitted Access for information on commit thresholds.
You can specify isolation level, synchronization point parameters, and concurrency parameters using Administration Services, MaxL, or ESSCMD. Changes to isolation level settings take effect the next time there are no active transactions. For information about deciding which settings to choose, see Committed Access and Uncommitted Access.
To specify data integrity settings, use any of the following methods:
| Tool | Topic | Location |
|---|---|---|
To specify isolation level settings using ESSCMD, enter SETDBSTATEITEM 18 in ESSCMD and either follow the prompts or supply the required values on the command line.
Choose 1 (committed access) or 2 (uncommitted access, the default). Depending on the type of access that you specify, ESSCMD prompts you for other parameters (or you can supply the values on the command line).
If you choose 1 (committed access), ESSCMD prompts for the following information:
If you choose 2 (uncommitted access), ESSCMD prompts for the following values. See Uncommitted Access for explanations of these options.
You can also specify isolation level parameters (pre-image access and so on) by specifying parameters 19-22 on SETDBSTATEITEM. Enter SETDBSTATEITEM with no parameters; ESSCMD displays a list that includes each parameter by number, with a description.
Here is an example of using SETDBSTATEITEM to set an isolation level. This example enables committed access and pre-image access and specifies indefinite wait time.
SETDBSTATEITEM 18 "SAMPLE" "BASIC" "1" "Y" "-1"
For more syntax information, see the Technical Reference.
To specify isolation level settings using MaxL, use this MaxL statement:
alter database dbname enable committed_mode
For detailed information about the use of this statement, see the Technical Reference.
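For example, the following sketch (assuming an application named Sample and a database named Basic) switches the database to committed access and then back to the default, uncommitted access:

alter database Sample.Basic enable committed_mode;
/* Return to the default isolation level (uncommitted access). */
alter database Sample.Basic disable committed_mode;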
To ensure data integrity, the Analytic Services kernel temporarily retains redundant (duplicate) information. To accommodate redundant information, allow disk space for double the size of the database.
Analytic Services maintains a file called dbname.esm, in which it stores crucial control information.

Caution: The dbname.tct file, the dbname.esm file, the index files, and the data files contain information crucial for data recovery. Never alter or delete these files.
To validate database integrity and to check for database corruption, use one of the following methods:
If errors occur during any of these checks, restore the database from backups. For an explanation of how to back up and restore a database, see Backing Up and Restoring Data.
The VALIDATE command performs many structural and data integrity checks:
Note: When you issue the VALIDATE command, we recommend placing the database in read-only mode.
As Analytic Services encounters mismatches, it records error messages in the VALIDATE error log. You can specify a file name for error logging; Analytic Services prompts you for this information if you do not provide it. The VALIDATE utility continues running until it has checked the entire database.
You can use the VALIDATE command in ESSCMD to perform these structural integrity checks.
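The following ESSCMD sequence is a sketch of such a check; the host, user name, password, application, database, and error-log file name are placeholders, and the database should be placed in read-only mode first, as recommended above.

LOGIN hostnode username password
SELECT application_name database_name
VALIDATE "validate.log"
EXIT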
During index free space validation, the VALIDATE command verifies the structural integrity of free space information in the index. If integrity errors exist, Analytic Services records them in the VALIDATE log. The file that you specified on the VALIDATE command holds the error log.
If VALIDATE detects integrity errors regarding the index free space information, the database must be rebuilt. You can rebuild in any one of three ways: by restoring the database from a recent backup; by exporting the data, recreating the database, and reloading the exported data; or by restructuring the database.
For an explanation of restoring a database, see Backing Up and Restoring Data. For a comprehensive discussion of restructuring, see Optimizing Database Restructuring.
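If you rebuild by restructuring, one option is an explicit full restructure through MaxL. This sketch assumes a Sample.Basic database and the force restructure clause documented for alter database in the Technical Reference.

/* Forces a full restructure of the database. */
alter database Sample.Basic force restructure;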
Even if you do not use VALIDATE, Analytic Services automatically performs certain validity checking whenever a read operation is performed, to ensure that the index is properly synchronized with the data.
For every read operation, Analytic Services compares the data block key in the index page with the data block key in the corresponding data block and checks other header information in the block.
If Analytic Services encounters a mismatch, it displays an "Invalid block header" error message.
After a server interruption such as a crash, Analytic Services recovers a database, rolling back all transactions that were active when the interruption occurred. Recovery time depends on the size of the index. The larger the index, the longer it takes to recover the database.
Analytic Services also recovers and consolidates free fragments (unused addressable units in the data blocks). However, free space recovery is the most time consuming aspect of database recovery, so it is delayed by default. You must trigger free space recovery explicitly unless you have changed the default setting. See Free Space Recovery for the advantages and disadvantages of delaying free space recovery.
Analytic Services recovers data as soon as the server is started after a server interruption. Recovery consists of the following phases:
Note: Free space recovery is delayed until you trigger it, unless you have changed the default setting. See Free Space Recovery.
A media failure (faulty disk, disk failure, or head crash) requires you to restore data from backups. For an explanation of how to back up and restore a database, see Backing Up and Restoring Data.
Caution: Do not move, copy, modify, or delete any of the following files: essxxxxx.ind, essxxxxx.pag, dbname.ind, dbname.esm, dbname.tct. Doing so can result in data corruption.
The Analytic Services kernel uses fatal error handling to display appropriate messages and to shut down the server, depending on the error encountered. For an explanation of how fatal error handling works, see Understanding Fatal Error Handling.
For information about how transactions are rolled back after a crash, see Committed Versus Uncommitted Access.
Database recovery takes place any time you load an application that has just crashed or terminated abnormally. Analytic Services does not perform free space recovery automatically because it is the most expensive part of database recovery. You must either trigger free space recovery explicitly or change the default setting so that Analytic Services will recover free space automatically.
All database functions run normally whether you recover free space or not. The advantage of recovering free space is that you can reuse disk space in the data files that is marked as free. The disadvantage is that free space recovery is time consuming, so you might want to delay recovery until an advantageous time.
You should, however, perform free space recovery as soon as possible to take full advantage of the free space in the data files and to ensure that the database hasn't been corrupted. Also, if a database crashes repeatedly and you do not run free space recovery, the data files can become unnecessarily large.
To trigger free space recovery, use the MaxL alter database command, as follows:
alter database DBS-NAME recover freespace
See the Technical Reference for more information about this MaxL command.
To change the default behavior for free space recovery, change the DELAYEDRECOVERY configuration setting to FALSE. See the "Configuration Settings" section of the Technical Reference for more information about this setting.
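For example, a single line in essbase.cfg turns off delayed free space recovery so that Analytic Services recovers free space automatically during database recovery. Whether the setting can also be scoped to an individual application or database should be confirmed in the Technical Reference.

DELAYEDRECOVERY FALSE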
To get information about free space recovery, use the GETDBSTATS command. GETDBSTATS provides the following information about free space recovery:
Free Space is Recoverable : true/false
Estimated Bytes of Recoverable Free Space : nnn
Note: If free space is recoverable, the block counters are estimates and do not necessarily match the number of existing blocks.
This table lists types of server interruptions and their results:
Table 64 shows what you need to do if a server interruption occurs during a transaction. How Analytic Services recovers from an interruption depends on the transaction isolation level setting (committed or uncommitted access). For an explanation of the recovery processes see Rollback with Committed Access and Rollback with Uncommitted Access.
| Type of Request | Recommended Action |
|---|---|
| Spreadsheet update (Lock and Send) | If Analytic Services issues an error, repeat the last send operation. If the spreadsheet has been lost or does not exist, and if you are using SSAUDIT spreadsheet logging, reload the data from the spreadsheet update log. |
| Calculation | Check the server and application logs to see where the calculation left off. For a review of methods, see Viewing the Analytic Server and Application Logs. Decide whether to start the calculation over. Repeat the last calculation. |
| Data load | Complete one of the following actions: |
| Arithmetic data load (adding to or subtracting from values in the database) | If the database is set to committed access, reload the data. (The transaction has been rolled back.) If the database is set to uncommitted access, the process is not as simple: some of the data loaded, so reloading all of the data yields incorrect results for the values that loaded twice. Instead, you must perform the following actions: |
| Restructure | The restructure is not complete. First, delete the temporary restructure files: |
Note: If the UPDATECALC parameter is set to FALSE, Analytic Services recalculates the entire database if an interruption occurs during a calculation. (The default value of the parameter is TRUE.)
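The UPDATECALC parameter referenced in the note is an essbase.cfg setting. As a sketch, the server-wide form would be a single line (verify the scope and syntax in the Technical Reference):

UPDATECALC FALSE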
For extra protection against data loss and for spreadsheet audit information, Analytic Services provides a spreadsheet update logging facility. Enable this facility by using the SSAUDIT or SSAUDITR parameter in the essbase.cfg file on the server. You can specify SSAUDIT for all databases on the server or for individual databases. For information on the essbase.cfg file and for syntax information on SSAUDIT and SSAUDITR, see the Technical Reference.
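As a sketch, either of the following forms could appear in essbase.cfg: the first (with no arguments) is assumed to apply to all applications and databases on the server, and the second assumes an application named Sample and a database named Basic. Verify the argument syntax in the Technical Reference.

SSAUDIT
SSAUDIT Sample Basic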
Analytic Services handles recovery under normal situations. However, sometimes you may want to load the spreadsheet update log manually; for example, if you have restored from a recent backup and do not want to lose changes made since the backup, or if you experience a media failure, you can recover transactions from the update log. To do so, use the Analytic Services command-line facility, ESSCMD, from the server console.
The following ESSCMD command sequence loads the update log:
LOGIN hostnode username password
SELECT application_name database_name
LOADDATA 3 filepath:application_name.ATX
EXIT
To simplify the process of loading the update log, prepare a batch file as described in Using Script and Batch Files for Batch Processing.
When SSAUDIT or SSAUDITR is specified, Analytic Services logs spreadsheet update transactions chronologically. Analytic Services uses two files:

dbname.atx stores spreadsheet update transactions as a unit that can be used as the input source for data loads.

dbname.alg contains historical information for each transaction, such as the user name, date, and timestamp, and the number of transaction rows from the .atx file.

Both files are stored on the server.
The spreadsheet update log can get quite large; even if you are using SSAUDITR, Analytic Services clears the log only after you back up data. If spreadsheet update activities are frequent in an application, you may want to manually delete the log periodically.
When a database is started after a shutdown, if spreadsheet logging is enabled, Analytic Services writes the following message to the database log:
Starting Spreadsheet Log volumename\application_directory\application_name\database_name\database_name.atx For Database database_name
An example of the message follows:
Starting Spreadsheet Log \ESSBASE\app\app1\sample\sample.atx for database sample
To ensure successful spreadsheet update logging, stop and restart the application after either of the following:
Analytic Services ensures that if you enable spreadsheet logging, updates cannot take place without being logged. If Analytic Services cannot write to the update log for any reason, Analytic Services stops the transaction and issues an error message.
SSAUDIT and SSAUDITR are available only from the essbase.cfg file.
Hybrid Analysis offers a means of integrating a relational database with a multidimensional database so that lower level members and their associated data remain in the relational database while upper level members and their associated data reside in the Essbase database. This option presents additional issues regarding data consistency and integrity.
For information on ensuring that the data is correct in all locations, see Managing Data Consistency.