Hi All, I have been working at this for some time to no avail, and I hope the community can help.

I have a cube that ultimately measures compliance in an organisation. Each compliance topic has a duration, e.g. Equality and Diversity - 3 yrs, or Fire Safety - 2 yrs. I would like to measure compliance rates to show trends over time and predict rates for the next quarter, etc.

Here is the rub: each topic undertaken is recorded once in the fact table per person, for that topic's duration. Suppose I completed my Fire Safety on 2014-01-01 and it is valid for 2 years. Were I to slice my cube at day granularity in my date dimension to 2014-06-01, only completions on that exact day would be selected, and the compliance rate would be too low, because the 2014-01-01 completion would not be in the data set.

Now, if instead I record my training in the fact table for every day of the 2 years from 2014-01-01, then when I select compliance for a month or a quarter I will be summing the compliance for each day. This will give me a compliance rate way over 100%, as every day of the month will be part of the monthly aggregate.

That, in a nutshell, is my problem: how can I calculate compliance without it coming out too low or too high? My granularity is day. Any help would be greatly appreciated.
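To make the overcounting concrete, here is a minimal sketch (the person, topic, and dates are purely illustrative, not real data) of why daily fact rows inflate a monthly aggregate unless something de-duplicates them:

```python
from datetime import date

# Hypothetical daily-snapshot fact rows: one row per (person, topic, day)
# for each day of June 2014 the training is valid.
facts = [
    ("alice", "Fire Safety", date(2014, 6, day))
    for day in range(1, 31)
]

# Naive monthly aggregate: summing the daily rows counts Alice 30 times.
naive_count = len(facts)

# A distinct count over (person, topic) counts her once.
distinct_count = len({(person, topic) for person, topic, _ in facts})

print(naive_count)     # 30
print(distinct_count)  # 1
```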
I agree with the notion of populating the fact table for each day of compliance. The only challenge then is how to aggregate. A good approach would be to create some kind of unique identifier that combines the compliance type and the customer/person; a distinct count over this identifier should then yield what you're expecting.

Another approach is to treat this like an inventory scenario and create fact tables at the different levels of your date hierarchy (daily, monthly, yearly). You could then use scoped assignments to "pick" the measure from the appropriate table. Apologies for the shameless plug, but you can find some more information here: [Using Scoped Assignments with Periodic Snapshot Fact Tables]
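As a rough sketch of the distinct-count idea outside the cube: with one row per (person, topic, day) for every day the training is valid, a distinct count of people over any date slice can never exceed one per person, so the rate stays bounded. Everything below (names, durations, the headcount of 2) is a made-up illustration:

```python
from datetime import date, timedelta

def daily_rows(person, topic, completed, years):
    """Expand one completion into daily snapshot rows for its validity period."""
    return [
        (person, topic, completed + timedelta(days=d))
        for d in range(int(years * 365))
    ]

facts = (
    daily_rows("alice", "Fire Safety", date(2014, 1, 1), 2)
    + daily_rows("bob", "Equality and Diversity", date(2013, 7, 1), 3)
)

HEADCOUNT = 2  # people who should hold each topic

def compliance_rate(topic, period_start, period_end):
    # Distinct count of compliant people in the period: each person counts
    # once no matter how many daily rows fall inside the slice.
    compliant = {
        person
        for person, t, day in facts
        if t == topic and period_start <= day <= period_end
    }
    return len(compliant) / HEADCOUNT

rate = compliance_rate("Fire Safety", date(2014, 6, 1), date(2014, 6, 30))
print(rate)  # 0.5 - Alice's 2014-01-01 completion still covers June 2014
```

In SSAS terms this corresponds to a distinct-count measure over a person/topic key on the daily snapshot table, rather than a plain sum of rows.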