Database Optimization Settings
Database Optimization during the build process
If you have a large database, or are using a SQL database server, building the database can be a lengthy process. You can set preferences so that builds take less time, while still retaining all of the data.
The Database Menu
Select View Config, then select the Database menu. The database options are grouped under four headings: General, Server, General Tuning, and Sizes Tuning.
The General section is where you set the length of time between automatic database updates. By default, the option is not selected and no interval is set. Select the checkbox, and then enter a number of seconds, minutes, hours, or days.
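For reference, the same preference can also be set in the profile's configuration file under the LogAnalysisInfo directory. This is only a sketch; the option name below is an illustrative assumption, so check the options reference for your Sawmill version for the exact path:

```
# Hypothetical profile .cfg fragment (option name is illustrative).
# Update the database automatically when it is older than the chosen
# interval; the checkbox in the UI enables or disables this option.
database = {
  options = {
    update_when_older_than = "1 day"   # e.g. "30 minutes", "12 hours"
  }
}
```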
The Server section allows you to set the server type, the database directory, and the SQL table name prefix and suffix. The server type can be internal, MySQL, MS SQL (via ODBC), or Oracle (via ODBC). The database directory is the location of an internal database; if it is left blank, Sawmill stores the database in a directory with the same name as the profile, within the LogAnalysisInfo directory. The SQL table name prefix and suffix label the tables so that a single SQL database can be shared between multiple profiles.
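As a sketch, a Server section configured for a shared MySQL database might look like the fragment below in the profile's .cfg file. The option names and values are illustrative assumptions; consult the options reference for your Sawmill version:

```
# Hypothetical Server settings (names and values are illustrative).
database = {
  options = {
    server = {
      type = "mysql"                 # or: internal, MS SQL / Oracle via ODBC
      database_directory = ""        # internal only; blank = LogAnalysisInfo/<profile>
      sql_table_name_prefix = "web_" # prefix/suffix let several profiles
      sql_table_name_suffix = ""     # share one SQL database safely
    }
  }
}
```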
The General Tuning section allows you to change settings that make the database build more quickly and efficiently. By default, three of the options are checked: build/update cross-reference tables after building/updating the database, build cross-reference tables in threads, and keep itemnums in memory. We will go through each option as it pertains to build performance.
The first option is building all indices simultaneously. Indices are built after the log data is processed; when this option is on, all indices are built at the same time, which greatly speeds up the index build but requires much more memory.
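In the profile file, this trade-off might be expressed as in the sketch below (the option name is an illustrative assumption):

```
# Hypothetical tuning fragment (name is illustrative): build every
# index at the same time after log processing.
# Result: faster index builds, much higher peak memory use.
database = {
  tuning = {
    build_all_indices_simultaneously = "true"
  }
}
```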
The next option controls whether cross-reference (xref) tables are built after updating or building the database. If it is left on, as in the default setting, the xref tables are built or updated immediately after the main table of the database, which slows the update or build. However, if the cross-reference tables are not up to date, they must be built before a report can be generated. If you want reports to appear quickly, leave this option on; if you have a very large database or a SQL database, turn it off for better build performance.
The next option, building all cross-reference tables simultaneously, is similar to building all indices simultaneously: if it is checked, the build is faster, but the process is memory intensive.
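The two cross-reference options above might look like this in the profile file (option names are illustrative assumptions):

```
# Hypothetical xref tuning (names are illustrative).
database = {
  tuning = {
    # Off for very large or SQL databases: build faster now, and pay
    # the xref-build cost at first report generation instead.
    build_xref_tables_after_update = "false"
    # On only if plenty of memory is available.
    build_all_xref_tables_simultaneously = "false"
  }
}
```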
The next option builds cross-reference tables in threads, which are then merged at the end to create the cross-reference tables of the main database. This is checked by default. If your system is fast and has multiple processors, it is best to leave it on; if your system is slow, or has slow disk I/O, turn it off.
The next option builds indices in threads, which affects multi-processor database builds. If your system has fast disk I/O, it is best to leave this on so that all processors are used; if your disk I/O is slow, it is best to leave it turned off. When it is off, the threads do not build indices; instead, the indices are built from the main table during the final stage.
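For a multi-processor system with fast disk I/O, the two threaded-build options above could be sketched as follows (option names are illustrative assumptions):

```
# Hypothetical threading options (names are illustrative).
database = {
  tuning = {
    build_xref_tables_in_threads = "true"  # per-thread xref tables, merged at the end
    build_indices_in_threads = "true"      # turn off if disk I/O is slow
  }
}
```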
The next option keeps itemnums in memory rather than on disk, and it is selected by default. Turning it on makes most operations much faster, especially database builds, but it can require huge amounts of memory if the itemnum tables are large. When it is turned off, some speed can be recovered by using a large value for the itemnums cache size.
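On a memory-constrained system with very large itemnum tables, the combination described above might be sketched as follows (the option names and the cache value are illustrative assumptions):

```
# Hypothetical itemnum options (names and value are illustrative).
database = {
  tuning = {
    keep_itemnums_in_memory = "false"  # off: avoid huge in-memory itemnum tables
    itemnums_cache_size = "100000000"  # a larger cache recovers some speed
  }
}
```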
The last option splits SSQL queries across multiple threads to improve performance. If it is set to "auto", queries are split into as many threads as there are processing cores.
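A sketch of the query-splitting option (the option name is an illustrative assumption):

```
# Hypothetical SSQL query threading (name is illustrative).
database = {
  tuning = {
    # "auto" = one thread per processing core;
    # a number would force a fixed thread count instead.
    split_queries_across_threads = "auto"
  }
}
```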