Yesterday, Panorama Consulting Group published its new ERP report, and it was a really interesting read. It shows some interesting effects of the economic recession on the ERP industry, but it also proves once again that ERP projects are risky, prone to budget and schedule overruns, and that the chances of realizing reasonable benefits are still roughly 50:50.
As reported on Panorama’s blog, these are the key findings:
- The average ERP implementation cost dropped from $6.2 million to $5.48 million.
- The average project duration dropped from 18.4 months to 14.3 months.
- The percentage of companies that realized between 51 and 100 percent of anticipated business benefits increased from 33 percent to 42 percent.
- The percentage of companies that realized 50 percent or less of anticipated business benefits decreased from 67 percent to 48 percent.
- The percentage of companies that realized 30 percent or less of anticipated business benefits decreased from 55 percent to 21 percent.
- The percentage of companies reporting schedule overruns (61.1 percent) and budget overruns (74.1 percent) increased significantly from 2009 (35.5 percent and 51.4 percent, respectively).
- In 2010, the percentage of companies that chose not to customize their solution at all (15 percent) was nearly half what it was in 2009 (28.3 percent).
- The percentage of companies that developed a business case as part of their implementation process rose from 85 percent in 2009 to 97 percent in 2010.
What especially thrilled me about this report is that it shows the level of adoption of SaaS ERP solutions and a decreasing trend in on-premises deployments. At 17% of the market, SaaS seems to be becoming a serious option. I would say this is expected, and I'm looking forward to next year's report – I'm sure SaaS will be well above 20%. This is something existing ERP vendors should be truly worried about: either they'll adapt, or they'll perish fairly soon.
What I dislike about this year's report is exactly what I disliked about last year's (although I didn't blog about it then): there is no consistent methodology in what is analyzed and reported. The metrics in 2010 differed from the metrics in 2009, the 2010 report stated different findings for 2009 than the 2009 report itself did (probably because of changes in methodology), and now we once again have a completely different set of metrics. Some have remained, but every year we get a different angle on the same thing. Are we doing better, or worse? I can't say for sure.