...
A source system can be created as part of the creation of a new CXO-Cockpit application, or by adding a Source System to an existing application. In both cases the following wizard appears:
...
- DB Type: We support 2 Database Types: SQL Server and Oracle.
Settings for SQL Server:
- SQL Server: the SQL Server instance where the HFM data is stored. Note: as a rule this is not the same database server as the one used for the CXO-Cockpit databases. Consult your HFM System Administrator for these details.
- Database: The name of the HFM database (usually something like FM, HFM, HYPPROD, ...)
- Application: The name of the HFM application from which we want to extract.
- Windows Authentication:
- Check the box if you want to access the HFM database with Windows Authentication. Note that not only the Design Studio user ('you') must be authenticated, but also the Windows account that is used as the service account for the CXO-Cockpit services, because data extractions are carried out by the Agent Service. The Windows account(s) must have read rights on the HFM database.
- If you want to use a SQL Server account instead, enter a username and password. This account can also be a read-only user.
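If the service account does not yet have read access, a DBA can grant it. A minimal T-SQL sketch is shown below; the database name HFM and the account DOMAIN\cxo_service are placeholders, so substitute the names used in your own environment:

```sql
-- Placeholder names: the HFM database is assumed to be called 'HFM' and the
-- CXO-Cockpit service account 'DOMAIN\cxo_service'; adjust both to your environment.
CREATE LOGIN [DOMAIN\cxo_service] FROM WINDOWS;            -- skip if the login already exists
USE [HFM];
CREATE USER [cxo_service] FOR LOGIN [DOMAIN\cxo_service];
ALTER ROLE db_datareader ADD MEMBER [cxo_service];         -- read-only access is sufficient
```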
Settings for Oracle:
- For an Oracle connection a TNS configuration is required on the CXO-Cockpit application server. Consult your Oracle DBA for more details.
- Check the box if you want to access the HFM database with Windows Authentication. Note that not only the Design Studio user ('you') must be authenticated, but also the Windows account that is used as the service account for the CXO-Cockpit services, because data extractions are carried out by the Agent Service.
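As an illustration, a tnsnames.ora entry on the application server could look like the sketch below. The alias HFMPROD, the host name and the service name are placeholders; your Oracle DBA can provide the actual values:

```
# Placeholder entry for tnsnames.ora on the CXO-Cockpit application server.
# Alias, host and service name must match your own HFM Oracle environment.
HFMPROD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = hfm-db.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = hfmprod.example.com)
    )
  )
```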
...
- Base Load Set: used for a full data refresh. When this 'Base' load set (of which you can only have one!) is executed:
- All data in the CXO Cockpit fact-database is cleared;
- All dimension tables are cleared;
- The dimension tables in the fact-database are built up again;
- The requested data-selection is extracted from HFM;
- This new data is transformed and copied to the fact-database in portions of one Scenario / Year / Period combination.
A full extraction only needs to be done when:
- No extractions have been done yet;
- Data in the past has changed;
- New HFM metadata has been loaded;
- You want to make an alternative selection from the HFM dimensions (more or less detail from the Entity dimension or the Account dimension, more Scenarios, or in fact any action that requires rebuilding of the dimensions in the CXO-Cockpit application).
- Incremental Load Sets: these can be used to quickly replace a slice of data, usually one Scenario, one Year and one Period. The CXO dimensions are not refreshed. Incremental Load Sets can be executed manually, but can also be used for Real-Time Synchronization (see the conceptual sketch below).
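Conceptually, an incremental refresh behaves like the sketch below. The table and column names are purely illustrative and do not reflect the actual CXO-Cockpit schema:

```sql
-- Conceptual illustration only: 'fact_data' and 'staging_data' are placeholder
-- names, not the actual CXO-Cockpit tables.
-- Replace one Scenario / Year / Period slice of data.
DELETE FROM fact_data
WHERE scenario = 'Actual' AND [year] = 2024 AND period = 'May';

INSERT INTO fact_data (scenario, [year], period, entity, account, value)
SELECT scenario, [year], period, entity, account, value
FROM staging_data
WHERE scenario = 'Actual' AND [year] = 2024 AND period = 'May';
```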
...
Tree-selection mode is the preferred mode because it requires the least amount of maintenance. For instance, if accounts are added to or deleted from HFM, they are automatically included in the CXO selection as long as their root node is selected.
WYSIWYG mode (mode 2)
(WYSIWYG mode is added for legacy reasons)
In this mode a black font is used. Similar to the Entity dimension, the resulting Account dimension in CXO looks exactly the way you expanded the selection.
...
Name | Description and options | Possible values |
---|---|---|
FilterSourceData | To get the raw data from HFM, queries are constructed that retrieve the data and dimension information from the right tables of the HFM database. If FilterSourceData is set to 0, only a simple filter for including or excluding ICP details is applied, but no filter for the Entities; Entities are then filtered once the raw data is stored in the CXO staging database. When FilterSourceData = 1, Entities are already filtered explicitly within the raw-data query (WHERE EntityId IN ...). In general, if more than ~30% of the entities is selected for inclusion in CXO it is better to set FilterSourceData = 0, because the huge WHERE clause would slow down the query. If less than ~30% is to be included, set FilterSourceData = 1 (see the sketch after this table). | 1 = True 0 = False |
PreAggregateAccounts | As explained earlier, selections from the Account dimension are usually done at a high level (the root node, or a few levels lower). All descendants of a selected node are then selected as well. A potential danger of this approach is that complete substructures might be repeated while you don't see them in your selection. For example, a node like NetProfit can have a lot of descendants, and each time this node is used in other KPIs all these descendants are potentially duplicated as well, resulting in a lot of duplicate fact records in the cube. By setting PreAggregateAccounts = 1, the 2nd occurrence of NetProfit is pre-aggregated and given the name ...NetProfit. From a data point of view, NetProfit and ...NetProfit are 100% equal; however, drilling into the descendants of ...NetProfit is not possible. The advantage of this approach is that it keeps the Account dimension small(er) and it generates fewer records in the cube. The disadvantage is that if you drill into e.g. OtherInfo you will never reach members like InterestInc(Exp). | 1 = True 0 = False |
WYSIWYGModeForCustomDimensions | As mentioned above, Custom dimensions (A01, A02, ... in CXO-Cockpit) can be generated in two ways: Tree-selection mode or WYSIWYG mode. | 0 = do not apply to any custom dimension, or a comma-separated list of the Custom dimensions you want to see in WYSIWYG mode (e.g., 1,3,4,7). Recommended value: 0 |
WYSIWYGModeForAccountDimension | See WYSIWYGModeForCustomDimensions (and the description of the Account dimension) | Recommended value: 0 |
UnaryOperatorsInAccountDimension | New parameter since 6.2. It is highly recommended to set this to 0 (False). This is the default value in version 6.3.2; for older versions it should be set to 0 manually. When set to False, this parameter ignores the Unary Operator field of the Account dimension in CXO. That means that all numbers are always rolled up with a '+' into the parent accounts. During the extraction process 'negative' accounts are multiplied by -1 to ensure correct values for the parent accounts. Omitting Unary Operators makes the dimension faster (from an MDX perspective). The only reason to set or keep this parameter at 1 is when, in the HFM chart of accounts, members of type FLOW are accumulated in parent members of type EXPENSE. For versions lower than 6.3.2, setting or keeping this parameter at 0 (False) must be accompanied by removing the yellow-marked tag from the XMLA definition of the Account dimension (ACC): right-click the dimension, select Script Dimension as / ALTER / To New Query Window, remove the tag and press F5. For version 6.3.2 or higher this is not needed. | Recommended value: 0 |
UnaryOperatorsInEntityDimension | New parameter since 6.2. It is highly recommended to set this to 0 (False). This is the default value in version 6.3.2; for older versions it should be set to 0 manually. As with Accounts, when set to False the Unary Operator of the Entity dimension is ignored and Entities are simply rolled up into their parents. In cases where this would lead to a wrong value at parent level, a compensation value is put on the parent entity. This is also done to speed up query times. For versions lower than 6.3.2, setting or keeping this parameter at 0 (False) must be accompanied by removing the yellow-marked tag from the XMLA definition of the Entity dimension (ENT): right-click the dimension, select Script Dimension as / ALTER / To New Query Window, remove the tag and press F5. For version 6.3.2 or higher this is not needed. | Recommended value: 0 |
fetchSizeOracleQueries | This is a technical setting with a default value of 0 (= don't use). The parameter can be set to a value > 0 (typically 500,000 or 1,000,000) to break the data extraction into smaller pieces. This can be useful if data is extracted from a cloud server with an Oracle database, or in case the CXO application server has limited RAM. It can also be used for (remote) SQL Servers: in that case the parameter specifies the number of records copied in one copy action. | In case of an on-premise database server: recommended value 0. In case of a cloud-based database server and failing or extremely slow extractions: recommended value 500,000 - 1,000,000. |
UseTabLock | When set to 1: the tables in the hfmexpressstaging and fact databases are proactively locked before data is updated or written. This may boost performance. Set the value to 0 to disable this behavior. | Recommended value: 1 |
multiPeriod | When set to 1: for one Scenario / Year combination, all periods to be extracted are retrieved with a single query. This can considerably boost the extraction speed. A potential - but unlikely - drawback is that, because bigger data objects are loaded into memory, CXO application servers with limited RAM (8 GB) can slow down. When set to 0, periods are extracted one by one. | Recommended value: 1 |
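Referring back to the FilterSourceData parameter at the top of this table, the sketch below contrasts the two filtering strategies. The table and column names are simplified placeholders and do not reflect the actual HFM schema:

```sql
-- Illustration only: 'HFM_FACT_TABLE' and its columns are placeholder names,
-- not the real HFM schema.

-- FilterSourceData = 0: no Entity filter in the raw-data query;
-- Entities are filtered later, in the CXO staging database.
SELECT EntityId, AccountId, Value
FROM HFM_FACT_TABLE;

-- FilterSourceData = 1: the raw-data query already filters by Entity.
-- With many selected entities this IN list becomes very large and slows the query down.
SELECT EntityId, AccountId, Value
FROM HFM_FACT_TABLE
WHERE EntityId IN (101, 102, 103 /* ... one id per selected Entity */);
```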
...