There is a lot of information to be found about how terribly slow rebuilding your reporting database can be, so I would like to share the setup and findings from rebuilding the reporting database for one of our customers. I found a configuration with which you will process at least 100k interactions per hour, and most likely more. If you want to test this on an environment other than production (hopefully), I suggest you take a brief look at one of my earlier posts: Restoring or migrating your Xdb data.
- Restoring or migrating your Xdb data
- Rebuilding your reporting database with electric speed
- Rebuilding your Xdb index
Rebuild your reporting database with electric speed
To start off, you need to create a blank/clean secondary reporting database. Name it like your primary reporting database, but append the string '-secondary', for example: 'yourenvironment-acc-xp1-reportingdb-secondary'. Now import the corresponding .dacpac file that Sitecore delivers with its vanilla installation(s), as can be found in the download section over here. It is named 'Sitecore.Reporting.dacpac'.
When you are ready to proceed, recreate the contained user on the secondary reporting database using the query below:
CREATE USER [reportinguser] WITH PASSWORD = 'XXX';
GO
EXEC sp_addrolemember 'db_datareader', [reportinguser];
EXEC sp_addrolemember 'db_datawriter', [reportinguser];
GO
GRANT EXECUTE TO [reportinguser];
GO
Now that your database is ready for the initiation of a rebuild, add a new connection string (in the ConnectionStrings.config) to the processing App Service:
<add name="reporting.secondary" connectionString="Encrypt=True;TrustServerCertificate=False;Data Source=yourenvironment-acc-xp1-sql.database.windows.net,1433;Initial Catalog=yourenvironment-acc-xp1-reportingdb-secondary;User Id=reportinguser;Password=XXX;" />
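Before kicking off a rebuild, it is worth verifying that the new entry actually landed in the config. A minimal sketch (the helper name and the file path in the usage comment are hypothetical; only the connection-string name comes from this post):

```python
import xml.etree.ElementTree as ET

def find_connection(config_path, name):
    """Return the connectionString attribute for the <add> entry
    with the given name, or None if no such entry exists."""
    tree = ET.parse(config_path)
    for add in tree.getroot().iter("add"):
        if add.get("name") == name:
            return add.get("connectionString")
    return None

# Usage (path is an example -- adjust to your App Service's layout):
#   conn = find_connection(r"App_Config\ConnectionStrings.config",
#                          "reporting.secondary")
#   assert conn and "reportingdb-secondary" in conn
```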
Since there are many options for tweaking your data aggregation, I will spare you an overview of all the settings I have been through and instead show you the configuration I used to process 100k interactions an hour.
I think the most important step to start with is applying the hot-fixes mentioned in the following KB article: 'https://kb.sitecore.net/articles/224364'
Apply the following hot-fix to your processing App Service:
- Add the config file to 'App_Config/Include/zzz/Sitecore.Support.313948.config'
- Add the dll file to 'bin/Sitecore.Support.313948.dll'
- Run the .sql query against your primary and secondary reporting databases
Apply the following hot-fix when you use Sitecore 9.0.1 (other versions can be found in the KB 224364 article):
Within this hot-fix there are two folders.
In the first folder:
\SC Hotfix 307942-1 Sitecore.Marketing.Operations.API 12.0.0 Xdb.ReferenceData.Service 1.0.1\
the .dlls found in this folder (Sitecore.Marketing.dll, Sitecore.Marketing.Taxonomy.dll and Sitecore.Xdb.ReferenceData.Service.dll) should be applied to each Sitecore role within your solution. In our case these are:
- Content Delivery
- Content Management
- EXM Dedicated Dispatch Service
In the second folder:
\SC Hotfix 307942-1 Xdb.ReferenceData.Service 1.0.1\
the .dll found in this folder (Sitecore.Xdb.ReferenceData.Service.dll) should be applied to all of your X-connect roles. In our case these are:
- Marketing Automation Operations
- Marketing Automation Operations - webjob (Automation Engine)
- Marketing Automation Reporting
- Xconnect Collection
- Xconnect Reference Data
- Xconnect Search Service (no need to append dll to Indexworker webjob!)
While rebuilding the reporting database I ran into issues, with the following error being thrown:
System.InvalidOperationException: This SqlTransaction has completed; it is no longer usable.
   at System.Data.SqlClient.SqlTransaction.ZombieCheck()
   at System.Data.SqlClient.SqlTransaction.Commit()
   at Sitecore.Analytics.Aggregation.History.SqlRebuildProcessStateDataProvider.d__3.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Sitecore.Analytics.Aggregation.History.ReportingStorageManager.d__24.MoveNext()
Sitecore suggested that I apply additional time-out settings in the connectionstrings.config of the processing App Service. Append these time-out settings to the following database connections: reporting, reporting.secondary and xdb.processing.tasks.
Add/append ';Connection Timeout=90;' to the corresponding connection strings.
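Since the same suffix has to be appended to three connection strings, a small script can do it consistently. A sketch, assuming a standard ConnectionStrings.config layout (the helper name is hypothetical; the connection names and the 90-second value come from the suggestion above):

```python
import xml.etree.ElementTree as ET

def add_timeout(config_path, names, timeout=90):
    """Append 'Connection Timeout=<timeout>' to the named connection
    strings in a ConnectionStrings.config, skipping entries that
    already set a timeout."""
    tree = ET.parse(config_path)
    for add in tree.getroot().iter("add"):
        value = add.get("connectionString", "")
        if add.get("name") in names and "Connection Timeout" not in value:
            add.set("connectionString",
                    value.rstrip(";") + f";Connection Timeout={timeout};")
    tree.write(config_path)

# The three connections the time-out should be applied to:
#   add_timeout(path, {"reporting", "reporting.secondary",
#                      "xdb.processing.tasks"})
```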
The next settings to adjust live in the configuration of your processing server. In our case we only wanted to rebuild the reporting database (aggregating historical data), so we only increased the maximum number of history workers to '4':
<history type="Sitecore.Analytics.Core.AsyncBackgroundService">
  <param resolve="true" type="Sitecore.Abstractions.BaseLog, Sitecore.Kernel" />
  <param desc="agentPath">aggregation/historyWorker</param>
  <param desc="interval" type="Sitecore.Analytics.Core.ConfigurationHelper, Sitecore.Analytics.Core" factoryMethod="ToTimeSpan" arg0="0.00:00:15" />
  <param desc="maxAgents" type="Sitecore.Analytics.Core.ConfigurationHelper, Sitecore.Analytics.Core" factoryMethod="ToInt" arg0="4" />
</history>
Other than that we changed the maximum batch size values to '128':
<param desc="maximumBatchSize" type="Sitecore.Analytics.Core.ConfigurationHelper, Sitecore.Analytics.Core" factoryMethod="ToShort" arg0="128" />
<param desc="maximumBatchSize">128</param>
<param desc="maximumBatchSize" type="Sitecore.Analytics.Core.ConfigurationHelper, Sitecore.Analytics.Core" factoryMethod="ToShort" arg0="128" />
Now on to the capacity of our Azure resources. We scaled the processing App Service up to P2v3 and scaled it out horizontally to 2 instances. I tested with 5 and 10 instances as well, but kept receiving the 'ZombieCheck' exception mentioned earlier, although I had not yet checked this with the mentioned hot-fixes applied. This is the main reason I find it plausible that you might be able to process more than 100k interactions per hour.
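To put the 100k-interactions-per-hour figure in perspective, a quick back-of-the-envelope calculation of the total rebuild time (the interaction count below is a made-up example):

```python
def rebuild_hours(total_interactions, rate_per_hour=100_000):
    """Rough rebuild duration at a fixed aggregation rate."""
    return total_interactions / rate_per_hour

# e.g. an xDB with 4.8 million historical interactions:
print(rebuild_hours(4_800_000))  # prints 48.0
```

At 100k per hour, even a few million interactions means a rebuild measured in days, which is why the scaling steps below are worth the effort.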
The last thing to do before triggering the rebuild is to scale your SQL database up to the Standard tier with 800 DTUs.
Now head over to your CM, and press that start button - GOOD LUCK!
Once the processing has completed, rename your current primary reporting database with, for example, the suffix '_toberemoved', and rename the secondary database to the naming convention used for your primary reporting database.
Do not forget to scale down your processing App Services and SQL database once you are finished.
After you have a new reporting database, you might see value in rebuilding your Xdb index as well; you might find my blog post on rebuilding the Xdb index useful.