I am having trouble improving the processing speed of an MDX query. This one is driving me bonkers, so I'm hoping someone can help me.
Here is my MDX Query:
SELECT
  CROSSJOIN([Year], {[Measures].[Quick Ratio]}) ON COLUMNS,
  CROSSJOIN(DESCENDANTS([Department]), DESCENDANTS([GL Company])) ON ROWS
FROM [GLRatios]
Year Count: 9
Department Count: 33
GL Company Count: 41
GL Summary Category ID.Category Count: 25
Quick Ratio is a Calculated Member calculated as follows:
IIF(
  ([GL Summary Category ID].[Category].&[Current Liabilities], [Measures].[Balance]) <> 0,
  (([GL Summary Category ID].[Category].&[Cash], [Measures].[Balance])
   + ([GL Summary Category ID].[Category].&[Accounts Receivable], [Measures].[Balance]))
  / ([GL Summary Category ID].[Category].&[Current Liabilities], [Measures].[Balance]),
  0
)
The resulting cellset consists of 1,430 rows and 11 columns.
The storage method is MOLAP, and I have designed aggregations.
1) When I run this query from SQL Server Management Studio, it takes 23 seconds the first time. The second time I run this MDX query from SQL Server Management Studio, it takes 2 seconds. I assume this is the result of caching. The first time I run the query after the cube has been processed, I see over 73,000 Query Subcube entries in SQL Profiler that look something like this:
EventClass:    Query Subcube
EventSubClass: 1 - Cache data
TextData:      0000000000000000000000000000000000000001000000000,1,10,1
I would say over 99% of the entries in SQL Profiler contain the same TextData. Why is the MDX query generating these entries, and why is it generating so many entries with apparently identical TextData values? Is it caused by something I have done when defining my dimensions, or by a design flaw in my Data Source View? Is it caused by using a calculated member as a measure?
2) I have a C# web service that uses the ExecuteXmlReader method of the Microsoft.AnalysisServices.AdomdClient.AdomdCommand class to execute the same MDX query on the server and return the result to the client as XML. In other words, I am rolling my own web service instead of calling the XMLA web service from the client.
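Here is a simplified sketch of the web method (error handling omitted; the server and catalog names in the connection string are placeholders):

using System.Xml;
using Microsoft.AnalysisServices.AdomdClient;

public class GLRatiosService
{
    // Executes the MDX on the server and returns the raw XML to the caller.
    [System.Web.Services.WebMethod]
    public string RunMdx(string mdx)
    {
        using (AdomdConnection conn = new AdomdConnection(
            "Data Source=localhost;Catalog=GLRatios"))   // placeholder names
        {
            conn.Open();
            AdomdCommand cmd = new AdomdCommand(mdx, conn);
            using (XmlReader reader = cmd.ExecuteXmlReader())
            {
                XmlDocument doc = new XmlDocument();
                doc.Load(reader);
                return doc.OuterXml;   // sent back to the client as XML
            }
        }
    }
}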
When my web service call executes the MDX query, it runs as the ASPNET user. Each time the MDX query is executed via the web service, it also generates more than 73,000 Query Subcube entries in SQL Profiler and takes 23 seconds to run. The MDX query generates the same SQL Profiler entries each time it is executed from the web service. It does not cache.
Is there something about the use of the ASPNET user that prevents caching of the data on the server? Is it a case of each ASPNET session having its own cached data?
Thanks in advance for your help.
Wendell G.
The large number of Query Subcube events is likely the result of getting cell values one at a time. The TextData field only shows the grains of the queries, not the slices. There is a Query Subcube Verbose event which shows the slices as well; using that event, you should see that the grains stay the same but the slices change each time. Since the subevent is Cache data, it means the AS server prefetched all the necessary cell values earlier, in a larger query, and is later filtering single cell values from that larger data cache. There was a performance improvement in the SP1 QFE rollup release which makes the cache lookup faster in certain cases.
Without looking at the database it is hard to tell why the AS server chose the cell-by-cell query plan; perhaps the presence of IIF was the cause. You can try the connection string property Cache Policy=9 to see if it makes any difference in performance for this query. Management Studio doesn't support custom connection string properties, so you have to use either the MDX Sample application from SQL Server 2000 or write your own C# application.
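For example, a minimal ADOMD.NET console application along these lines (server and catalog names are placeholders) can pass the property and time the query:

using System;
using Microsoft.AnalysisServices.AdomdClient;

class CachePolicyTest
{
    static void Main()
    {
        // Cache Policy=9 is passed as a custom connection string property.
        // Server and catalog names are placeholders.
        string connStr = "Data Source=localhost;Catalog=GLRatios;Cache Policy=9";
        using (AdomdConnection conn = new AdomdConnection(connStr))
        {
            conn.Open();
            AdomdCommand cmd = new AdomdCommand(
                "SELECT CROSSJOIN([Year],{[Measures].[Quick Ratio]}) ON COLUMNS, " +
                "CROSSJOIN(DESCENDANTS([Department]),DESCENDANTS([GL Company])) ON ROWS " +
                "FROM [GLRatios]", conn);
            DateTime start = DateTime.Now;
            using (AdomdDataReader reader = cmd.ExecuteReader())
            {
                int rows = 0;
                while (reader.Read()) rows++;
                Console.WriteLine("Rows: " + rows + ", elapsed: " + (DateTime.Now - start));
            }
        }
    }
}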
|||What does Cache Policy=9 do? The MDX Solutions book only lists Cache Policy values up to 7.
Thanks,
|||Sometimes the AS server chooses a cell-by-cell calculation plan over a bulk evaluation plan. Setting Cache Policy=9 forces the AS server to always use a bulk evaluation plan when one is available. This is a workaround for the rare cases where the server's choice hurts performance.
WARNING:
Generally speaking, setting Cache Policy=9 can cause severe performance degradation. Use it only for diagnostic purposes, or occasionally to boost the performance of an individual query. Microsoft does not recommend changing this connection string property unless Customer Support explicitly recommends it after a full diagnosis of the customer database and after all other means of improving performance have been exhausted.
|||Hey,
I've tracked the majority of my speed issues down to a single calculation. My [Measures].[Quick Ratio] is a calculated member that is based on a second calculated member:
SUM(PeriodsToDate([Time Periods].[(All)],[Time Periods].CurrentMember),Measures.[Beginning Amount])
This calculated member is very slow when I crossjoin Department and GL Company on rows. In the fact table of my database, there is at least one row for each combination of department and company, dated 1/1/2001, that contains a Beginning Amount, even if it is zero. If the balance for a Department and GL Company combination has changed as of the end of 2001, I store the net change (increase or decrease) in the balance as a Beginning Amount in a new row dated 1/1/2002, and so on for each subsequent year. This way, to get the beginning balance for any year, all I have to do is sum the Beginning Amount values up to that point.
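To illustrate the storage scheme outside of MDX, here is a purely illustrative C# sketch (the numbers are made up):

using System;
using System.Collections.Generic;

class BeginningBalanceDemo
{
    static void Main()
    {
        // One Beginning Amount delta row per year for a single
        // Department / GL Company combination (values made up).
        SortedDictionary<int, decimal> deltasByYear = new SortedDictionary<int, decimal>();
        deltasByYear.Add(2001, 5000m);   // opening balance stored 1/1/2001
        deltasByYear.Add(2002, -750m);   // net change stored 1/1/2002
        deltasByYear.Add(2003, 1200m);   // net change stored 1/1/2003

        // The beginning balance for a year is the running sum of all deltas
        // dated on or before 1/1 of that year -- the same result that
        // SUM(PeriodsToDate(...), [Beginning Amount]) computes in MDX.
        int targetYear = 2003;
        decimal balance = 0m;
        foreach (KeyValuePair<int, decimal> kv in deltasByYear)
        {
            if (kv.Key > targetYear) break;
            balance += kv.Value;
        }
        Console.WriteLine("Beginning balance for " + targetYear + ": " + balance);
        // Prints: Beginning balance for 2003: 5450
    }
}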
A SUM over PeriodsToDate([Time Periods].[(All)], ...) always produces the correct amount, no matter which level of the time dimension I use (year, quarter, month), but it is very slow. Is there a more efficient way to get the right answer?
Thanks for any help you can provide.
Wendell G
|||There was a major performance improvement in SP2 that addresses exactly this scenario. A public beta release is scheduled to come out in a few weeks.