Wednesday, August 31, 2016

Using Temporary Variable in FIXParallel

While reading blogs and forums on using temporary variables in FIXPARALLEL, I was under the impression that nothing changes in the way we use them. But I kept getting the following error when I used a temp var the same way we use it in a FIX command:

"FIXPARALLEL (line 99): Modify the VAR variable inside FIXPARALLEL command Rule"

However, after going through the documentation and some testing, I found that this is not the case.

The following example shows how we use a temporary variable in a FIX command:

VAR V_Period_Start = 20160101;
VAR V_Period_End = 20161231;
VAR V_TotalDays = 0;

FIX(@RELATIVE("Entity_1",0),
        "Budget",
        &WFBudYr
    )
    "Working"
    (
        V_TotalDays = @CalcMgrDateDiff(V_Period_Start, V_Period_End, "day") + 1;

        @RETURN(@CalcMgrDoubleToString(V_TotalDays), INFO);
    )
ENDFIX;


Note above that we can declare the variables before the FIX and can also initialize them with a value at declaration. However, this is not the case when using variables (in a similar way) within the FIXPARALLEL command.

Using FIXPARALLEL, the same logic must be written as follows:

FIXPARALLEL(8,@RELATIVE("Entity_1",0),
            "Budget",
            &WFBudYr
    )
    THREADVAR V_Period_Start;
    THREADVAR V_Period_End;
    THREADVAR V_TotalDays;

    "Working"
    (
        V_Period_Start = 20160101;
        V_Period_End = 20161231;
        V_TotalDays = @CalcMgrDateDiff(V_Period_Start, V_Period_End, "day") + 1;

        @RETURN(@CalcMgrDoubleToString(V_TotalDays), INFO);
    )
ENDFIXPARALLEL;


As you can see, a temporary variable used within FIXPARALLEL must be declared inside the FIXPARALLEL block with the keyword "THREADVAR". Also note that you cannot initialize a THREADVAR with a value at declaration; it must be assigned within the calculation block.

Monday, May 9, 2016

Do away with aggregation in BSO by using ASO plan type in Planning Application

In my Hyperion Planning implementation experience, I have worked more on customized Workforce Planning implementations than on any other module, and aggregating the data (in a BSO application) has always been a challenge.
This time, however, the option of an ASO plan type, combined with a CDF to copy data from BSO to ASO, made implementing workforce planning quite interesting and much easier.
I have been working on a customized workforce planning application with a total of 10 dimensions in one Hyperion Planning application.
Following are the member counts for some of the dimensions:

  • Employee – 150K members (based on various regions / entities)
  • Entity – 18K
  • Level / Positions – 48K
  • Employee Categories – 19
As part of the PoC, we initially developed the application without an ASO plan type, and aggregation times ranged from 10 to 20 minutes per region, which was higher than expected. Not to mention that the page file size ran into the tens of GBs (25 to 30+ GB).
We then explored an ASO plan type, with a design in which most of the user input and level-zero calculations are performed in BSO, while upper-level review / reporting is done via the ASO cube.
We used the following artifacts to extract and move data from BSO to ASO:

1. The DATAEXPORT calc command to export data to a file.
2. The following CDF, MaxLFunctions, to execute a MaxL script that loads the data into ASO using a rule file:

RUNJAVA com.hyperion.calcmgr.common.cdf.MaxLFunctions
Both steps are performed in a single Calculation Manager rule, which executes in under a couple of minutes; a sketch of such a rule is shown below.
Users can execute this rule based on user variables and immediately see the upper-level data from the ASO cube in an FR report or via Smart View.
Also, the index and page files total only a few GB combined.
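
For reference, here is a minimal sketch of such a rule. The file paths, FIX members, app/db names, and MaxL script contents are placeholders, and the argument list of the MaxLFunctions CDF varies by Calc Manager version, so check the documentation for your release before using it:

/* Step 1: export level-0 Budget data to a flat file
   (paths and FIX members below are illustrative only) */
SET DATAEXPORTOPTIONS
{
    DataExportLevel "LEVEL0";
    DataExportOverwriteFile ON;
};
FIX("Budget", &WFBudYr, @RELATIVE("Entity", 0))
    DATAEXPORT "File" "," "/u01/exports/wfp_bso.txt";
ENDFIX

/* Step 2: run a MaxL script through the CDF. The script itself
   would contain an import statement along these lines:
     import database WFP_ASO.WFP data
       from data_file '/u01/exports/wfp_bso.txt'
       using server rules_file 'ldWFP'
       on error write to '/u01/logs/ldWFP.err';
   The CDF arguments (script path, credentials, log file) are
   version-specific and are left as a placeholder here. */
RUNJAVA com.hyperion.calcmgr.common.cdf.MaxLFunctions
    "<arguments per your Calc Manager version>";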

Note that "RUNJAVA com.hyperion.calcmgr.common.cdf.MaxLFunctions" may not work on Unix machines out of the box.
I was getting the following error:

Error: Cannot run program "essmsh": error=2, No such file or directory 


To make it work, add the following line to your OPMN.xml file:

<variable append="true" id="PATH" value="$ESSBASEPATH/bin"/>

The result should look like this:

<variable id="ESSBASE_CLUSTER_NAME" value="EssbaseCluster1"/>
<variable append="true" id="PATH" value="$ESSBASEPATH/bin"/>
</environment>


Restart the Essbase service once for the changes to take effect.

Saturday, March 26, 2016

@CalcMgrDateDiff and @CalcMgrGetDatePart

@CalcMgrDateDiff:

Syntax:

@CalcMgrDateDiff(fromDate, toDate, date_part)

date_part options:

  • "day"
  • "month"
  • "week"
  • "year"

Example(s):

VAR V_Start = 20160701;
VAR V_End = 20170630;
VAR V_Diff = 0;

V_Diff = @CalcMgrDateDiff(V_Start, V_End, "day");


Returns 364

V_Diff = @CalcMgrDateDiff(V_Start, V_End, "month");

Returns 11

V_Diff = @CalcMgrDateDiff(V_Start, V_End, "week");

Returns 52

V_Diff = @CalcMgrDateDiff(V_Start, V_End, @NAME("year"));

Returns 1

Note that when a dimension named "Year" exists in the outline, you may need to wrap the date_part in @NAME, as shown in the last example, so that the string is not interpreted as a member name.

@CalcMgrGetDatePart:

Syntax:


@CalcMgrGetDatePart(date, date_part_ex)

date_part options:

  • "dayofmonth"
  • "dayofyear"
  • "month"
  • "weekofmonth"
  • "weekofyear"
  • "year

Example(s):

VAR V_Start = 20160701;
VAR V_Num = 0;

V_Num = @CalcMgrGetDatePart (V_Start, @NAME("year"));


Returns 2016

V_Num = @CalcMgrGetDatePart (V_Start, "dayofmonth");

Returns 1

"BegBalance" = @CalcMgrGetDatePart (V_Start, "dayofyear");

Returns 183

V_Num = @CalcMgrGetDatePart (V_Start, "month");

Returns 7

V_Num = @CalcMgrGetDatePart (V_Start, "weekofmonth");

Returns 1

V_Num = @CalcMgrGetDatePart (V_Start, "weekofyear");

Returns 27
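
As a practical sketch, the two functions combine naturally for things like salary proration. The fragment below would sit inside an appropriate FIX; the members "Working" and "Salary", the hire date, and the salary figure are all hypothetical:

VAR V_HireDate = 20160715;
VAR V_YearEnd  = 20161231;
VAR V_DaysActive = 0;

"Working"
(
    /* days from hire date through year end, inclusive: 170 */
    V_DaysActive = @CalcMgrDateDiff(V_HireDate, V_YearEnd, "day") + 1;

    /* prorate a 120000 annual salary over the 366 days of 2016:
       120000 * 170 / 366, i.e. roughly 55738 */
    "Salary" = 120000 * V_DaysActive / 366;
)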

Monday, March 14, 2016

Understanding @IDESCENDANTS function

It is always good to know how a function works so that we get the desired and expected results.

@IDESCENDANTS ("Zone"):

The @IDESCENDANTS function returns all descendants of the specified member, including the member itself; however, the order in which it retrieves the members follows a defined pattern.

The following screenshot shows how @IDESCENDANTS populates members. The sequence numbers indicate the order in which the members are populated during calculation.

Also note that this is the same as @IDESCENDANTS("Zone", 0).

@IDESCENDANTS ("Zone",-1):

Using a negative integer, for example -1, the system will fetch all members from level 1 up to (and including) the selected member. In short, it excludes level-0 members.

See the screenshot below:

@IDESCENDANTS ("East",3):

When you provide a positive number, it is treated as a generation number relative to the dimension hierarchy, not to the selected member.

So, based on the example shown in the screenshot, the system will fetch the descendants of "East" down to generation 3, along with the selected member itself.

Note that if the selected member's generation number is greater than or equal to the parameter, the system will fetch only the specified member, since all of its descendants lie at higher generation numbers.

For example, for @IDESCENDANTS("ENT_E5", 2), the system will fetch only the "ENT_E5" member, because the generation number of "ENT_E5" is 3, which is greater than the parameter 2.
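
As a quick sketch of how this behaves inside a calc script (the CLEARDATA target "Working" is an assumed member, used purely for illustration):

FIX(@IDESCENDANTS("Zone", -1))
    /* member set: "Zone" and its descendants down to level 1;
       level-0 input members are excluded, so only upper-level
       data is cleared */
    CLEARDATA "Working";
ENDFIX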

Tuesday, February 16, 2016

Loading Data in Essbase: Have Sparse Dimension Members before Dense Dimension Members



Even after working with Essbase for quite some time, I still learn something new every time.

I received a SQL extract to load into Essbase. As far as I remember, this was the first time the file format was not defined by me or another team member, so the columns arrived in a random order. As expected, a few columns were not required and had to be ignored during the load.

I created a rule file and validated it successfully. When I tried to load the data, I got the following error in the error file for each record:

\\ Invalid number in data column

Many Account members appeared in one of the member fields, and these were not assigned as the data field.

I reviewed the data in each column but could not find any fault. It was then observed that one of the dense dimension columns appeared before a sparse dimension column, which caused the error. I moved that sparse dimension field before the dense dimension field, and the load worked.
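
To illustrate with hypothetical members (assume Period is dense, while Entity and Employee are sparse), a record laid out like this triggered the error, because the dense "Jan" field precedes the sparse fields:

"Jan"    "ENT_E5"    "EMP_1001"    "Salary"    5000

while reordering the fields so that the sparse members come first loaded cleanly:

"ENT_E5"    "EMP_1001"    "Jan"    "Salary"    5000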

Clearly, the error message was misleading, as in many other cases.

This shows that we sometimes follow standards unknowingly, and we may not realize it until we experience the alternative :)