Let's say we pull in DataSet 1, which includes all data to date.
We set up our default dimensions and so on at the component level, so when the data loads the user gets these by default. We allow the user to adjust these dimensions and then save the map so they can reload it at a later time. In general this works, HOWEVER...
Let's say it is a month later and they load the new current DataSet 1, which now has another month's worth of data. They then load the map they saved previously, and we start seeing problems. It seems the map stores a lot of data information (things that can be filtered), and the new DataSet 1 may contain new filterable data; because the map was saved against last month's data, it is possible the user won't see their data correctly.
What we have had to do is, when the user goes to save the map (assuming they aren't using filters), save the map while the dataset is empty, so that when they load that map against a newer dataset it doesn't confuse things.
Ultimately, I would like to know if there is a way around the quirks of saving a map and then using it with a newer dataset, without data appearing to be missing.
An even better example: one query is based on Job Number X. The user adjusts the dimensions and saves the map. They then run a query on Job Number Y, and when they load that same map, they don't see their data properly. It is as if the map has saved all of the field data in the dimensions, which prevents them from seeing the data in the new dataset that was based on a different job. This also explains why the saved map is quite large when the user has a large cube; if I save the map with an empty dataset, it is much, much smaller.
Also, let's say the user does want to save the filter... Assuming the filter isn't on something that has already been filtered by a query (like Job Number), we should be able to save the map so that it ONLY saves the items that have been filtered and NOT all of the other possible field values. It seems to me this would prevent problems with the data, and when the user went to adjust the filter you could then load all of the current field data to give them the additional values they can filter on.
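To make the idea concrete, here is a minimal sketch of what I mean (written in Python purely for illustration; none of these function or field names come from the actual component, they are just hypothetical stand-ins): the saved map keeps only the dimension layout and the explicitly filtered values, and on load the filter is re-applied against whatever field values the current dataset actually contains.

```python
# Illustrative sketch only -- save_map/load_map and the layout shape are
# hypothetical, not the component's real API. The idea: persist the layout
# plus active filter selections, never the full list of field values that
# happened to exist at save time.

import json

def save_map(layout, active_filters):
    """Serialize only the dimension layout and explicitly filtered values."""
    return json.dumps({
        "layout": layout,                  # e.g. {"rows": [...], "cols": [...]}
        "filters": {field: sorted(values)  # only fields the user filtered
                    for field, values in active_filters.items()},
    })

def load_map(saved, current_field_values):
    """Re-apply a saved map against the field values of the current dataset.

    current_field_values: {field: set of values present in the new dataset}.
    Filter selections are intersected with what actually exists now, so
    stale values are dropped; fields that were never filtered stay fully
    visible because nothing was stored for them.
    """
    data = json.loads(saved)
    filters = {}
    for field, values in data["filters"].items():
        present = current_field_values.get(field, set())
        filters[field] = set(values) & present  # keep only values that still exist
    return data["layout"], filters

# Usage: a map saved against Job X's data still behaves sensibly on Job Y's.
saved = save_map({"rows": ["Phase"], "cols": ["CostType"]},
                 {"Phase": {"010", "020"}})
layout, filters = load_map(saved, {"Phase": {"020", "030"},
                                   "CostType": {"L", "M"}})
print(filters)   # {'Phase': {'020'}} -- only the surviving filtered value
```

The point of the sketch is just the separation: the filter selections are data worth persisting, but the full domain of each field belongs to the dataset, not the map.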
I don't know if all of this makes sense or not. I just wanted to get your feedback and thoughts, as we would really like one map to be usable across many different identical queries that have already been filtered by the query itself (by Job No) before going into the PivotCube. You may ask why we don't just pull in all jobs and use the PivotCube to filter things... well, not all users have security access to all jobs, and the underlying data size gets MASSIVE with all jobs, so that isn't feasible.
Thanks for your time and input and I look forward to your response!!!
Greg