If I understand the question correctly, a lot of this depends on how you implement the underlying data structure and the ETL/ELT. In our case, we could easily end up with fact rows that had no corresponding dimension rows. For example, we could have a Sales row with no matching Customer. To work around that, our ETL inserted an "inferred member": a dimension row with a name of "Unknown" but the actual source system ID. We then passed the surrogate key for that newly inserted member back to the ETL to insert into the fact table, and swept back to fix those placeholder rows the next time the dimension ETL ran.

We ran into a couple of learning opportunities along the way. Our first approach was to use a generic "-1" key for *any* inferred row. That meant we could never correct it later with the actual value, because a single shared key can't be traced back to the distinct source rows it stands in for.

Another issue we hit was that the standard lookup component would not *add* new rows to its cache, so we had to account for that in the stored procedure we used to insert the inferred member. If we didn't, we ended up with multiple copies of the same row, one for each time the inferred member showed up in the source during that load.

Once you have your inferred members loading correctly, the cube will pick them up appropriately. You'll probably need to handle processing the cube (or at least the affected dimensions) a bit better once you hit those areas, though. Perhaps a job that notices changes to the inferred-member bit and triggers a rebuild, or something like that.
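To make the get-or-insert pattern concrete, here is a minimal sketch using Python and SQLite. The table and column names (`DimCustomer`, `CustomerKey`, `SourceCustomerID`, `IsInferred`) are illustrative assumptions, not the actual schema; in practice this logic would live in a stored procedure in your warehouse database. The existence check before the insert is what prevents the duplicate-row problem described above, and the final `UPDATE` stands in for the dimension ETL's sweep-back pass.

```python
import sqlite3

# Hypothetical dimension schema; names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE DimCustomer (
        CustomerKey      INTEGER PRIMARY KEY AUTOINCREMENT,  -- surrogate key
        SourceCustomerID TEXT UNIQUE NOT NULL,               -- source system ID
        CustomerName     TEXT NOT NULL,
        IsInferred       INTEGER NOT NULL DEFAULT 0          -- inferred-member bit
    )
""")

def get_or_insert_inferred_member(conn, source_id):
    """Return the surrogate key for source_id, inserting an inferred
    ('Unknown') row if the dimension has no matching member yet.
    Checking for an existing row first is what keeps a member that
    appears many times in one load from being inserted many times."""
    row = conn.execute(
        "SELECT CustomerKey FROM DimCustomer WHERE SourceCustomerID = ?",
        (source_id,)).fetchone()
    if row:
        return row[0]
    cur = conn.execute(
        "INSERT INTO DimCustomer (SourceCustomerID, CustomerName, IsInferred) "
        "VALUES (?, 'Unknown', 1)",
        (source_id,))
    return cur.lastrowid

# The fact load hits the same missing customer twice; both calls
# resolve to the same surrogate key instead of creating duplicates.
k1 = get_or_insert_inferred_member(conn, "CUST-042")
k2 = get_or_insert_inferred_member(conn, "CUST-042")
assert k1 == k2

# Later dimension run "sweeps back": overwrite the placeholder with
# the real attributes and clear the inferred-member bit.
conn.execute(
    "UPDATE DimCustomer SET CustomerName = ?, IsInferred = 0 "
    "WHERE SourceCustomerID = ?",
    ("Acme Corp", "CUST-042"))
```

Keying the inferred row on the real source system ID (rather than a shared "-1") is what makes the sweep-back update possible: the dimension ETL can find the placeholder by its natural key and correct it in place, and the fact rows already pointing at that surrogate key need no change.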