For some people, some things can’t happen soon enough, and last week I saw two examples of that.
In the first, I saw Neil Hambly’s presentation at a SQL Social event, where he showed how indexed views can speed up known queries on large amounts of relatively static data by margins that would have been unimaginable ten years ago. By caching, or materialising, a virtual table, Neil showed how complex BI-style queries can be reduced from hours or minutes to seconds or even milliseconds. This is great; it gives you the ability to report ad hoc on large, complex data sets without having to wait for overnight ETL or cube-rebuilding tasks, provided you know roughly what the query will be in advance.
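As a rough sketch of the technique, here is what an indexed view over a hypothetical sales table might look like (the table and column names are illustrative, not from Neil’s talk). The view must be schema-bound, and it is the unique clustered index that actually materialises the result set so SQL Server can answer the aggregate query from the stored rows rather than scanning the base table:

```sql
-- Hypothetical sales schema; object names are illustrative only.
-- SCHEMABINDING and two-part names are required for indexed views.
CREATE VIEW dbo.vSalesByProduct
WITH SCHEMABINDING
AS
SELECT ProductID,
       COUNT_BIG(*) AS OrderCount,              -- COUNT_BIG is required when grouping
       SUM(ISNULL(LineTotal, 0)) AS TotalSales  -- aggregates over nullable columns must be wrapped
FROM dbo.SalesOrderDetail
GROUP BY ProductID;
GO

-- Creating a unique clustered index materialises the view's results on disk.
CREATE UNIQUE CLUSTERED INDEX IX_vSalesByProduct
ON dbo.vSalesByProduct (ProductID);
```

Once the index exists, queries that aggregate sales by product can be satisfied from the pre-computed rows, which is where the hours-to-milliseconds improvement comes from.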
For cases where the underlying queries can’t be written as indexed views, or where the data doesn’t even come from within SQL Server, PowerPivot has also made querying huge data sets in near-real time feasible. Within the familiar environment of Excel, power data users can slice and dice hundreds of millions of rows, again in a way that only a few years ago would never have been dreamt of.
In summary, data querying technology has progressed so much in recent years that complex analysis on large data sets in near-real time is now feasible, and hopefully achievable most of the time.
In the second case I saw the opposite: someone trying to reduce the time his housekeeping tasks were taking, and complaining when he couldn’t get it any lower. There were two issues here. First, given the performance of the hardware being used, the tasks were already running about as quickly as could reasonably be hoped for. More importantly, though, the user didn’t actually have a valid reason for needing them to run any faster; he just wanted them to, because he felt things could always be optimised.
When you design a system you should accept that some things will always take a certain amount of time. Reporting on a sales history database might improve by using the technologies discussed above, but backing up a 20GB database is never going to happen in seconds, no matter how much you optimise your code or reduce the server’s workload. The same acceptance should extend to functional operations within an application. If placing an order has to complete 12 external tasks to succeed, the functional owners should accept it’s never going to happen in milliseconds. Sometimes it is worth investing dev or DBA time in optimising code: reducing a query’s runtime from 120 seconds to 2 demonstrates a clear return on that effort. However, is it really worth optimising that code even further to make it run in 1 second instead of 2? If it means your system can generate twice the revenue, then maybe; if no one will actually notice, then it’s probably time to move on to the next problem.
My message here is that it’d be great if everything could be optimised to run in milliseconds, whether through new functionality, code optimisation or throwing large amounts of hardware at a slow-running task, but we have to accept that some things take time. You will never back up a 20GB database in milliseconds, which is handy, because only a few people on this planet would ever need to.