Top 5 SQL Server Improvements in NAV 2013


Performance is one of those things you can’t get enough of, and NAV is one of those systems where an extra operation per second is always welcome. Yesterday, during the Expert Panel at the NAV day of the Decisions Spring conference, there was a question: is there any improvement in how NAV 2013 works on SQL Server?

And the answer is: oh yeah!

As a matter of fact, everything is new and improved.

Jörg has already posted an overview of the NAV on SQL Server news in his last blog post, but I still think there’s room for a couple more words on this really amazing palette of news and improvements.

As I said, the SQL Server improvements are plenty. Here’s a list of the top 5 technical improvements that float my boat.

1. Cursors are gone

If there was a single thing killing performance in NAV, it was server-side cursors. The burden on SQL Server, especially in critical multi-user environments, was tremendous, and I’ve seen server monsters crawling under the pressure. The cursors are replaced with MARS (Multiple Active Result Sets), which basically takes the chore of browsing through recordsets away from SQL Server and assigns it to the NST (the NAV Service Tier).

2. Caching

Apart from MARS, another killer improvement is the caching. Most data access operations are cached on the NST, which results in a considerable reduction in the number of SQL Server calls. Now, caching alone is a great improvement, but caching + MARS is a winner.

Try profiling a simple thing, such as this:

IF Cust.FINDSET THEN
  REPEAT
  UNTIL Cust.NEXT = 0;

Run it a couple of times in a row. Under NAV 2013, you get a single SELECT against SQL Server, then nothing else. The iteration happens on the NST, and every consecutive call for the same data is served entirely from the NST. Try that under NAV 2009, and the profiler goes bananas.
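The cache is not something you’re stuck with when you do need fresh data, either. As far as I know, reads after a LOCKTABLE go straight to SQL Server, and SELECTLATESTVERSION discards whatever the NST has cached. A minimal sketch, reusing the same Cust variable as above:

// Reads after LOCKTABLE bypass the cache and hit SQL Server directly
Cust.LOCKTABLE;
IF Cust.FINDSET THEN
  REPEAT
    // work with guaranteed-fresh data inside the transaction
  UNTIL Cust.NEXT = 0;

// Or discard the cached data altogether, so the next read goes to SQL Server again
SELECTLATESTVERSION;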

3. SIFTs

There are several improvements in how NAV 2013 handles SIFTs. First, you don’t have to explicitly declare SIFT fields on keys. You can do CALCFIELDS and CALCSUMS on any decimal field, regardless of the structure of the keys on the source table, and SQL Server simply calculates the value. This relieves SQL Server of maintaining too many indexed views. Yes, I know, it also slows the read operations slightly, but did I mention the caching? Oh, sorry, I have. There.
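To make the first point concrete, here’s a minimal sketch of a CALCSUMS over a field that is not a SumIndexField on any key (the G/L account number filter is an assumption, pick any account from your own data):

// GLEntry is a record variable for table 17, G/L Entry
GLEntry.SETRANGE("G/L Account No.",'1000'); // assumed account no., for illustration only
GLEntry.CALCSUMS(Amount); // in NAV 2013 this works even without a matching SIFT key
MESSAGE('Sum of amounts: %1',GLEntry.Amount);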

Another improvement is that you can include the SIFT fields in the SQL statement, and get the SIFTs with the same single SELECT statement that the NST issues against SQL Server. You do this with the SETAUTOCALCFIELDS statement, which you call on a record variable just before you FIND or FINDSET the records.

Compare these two in the profiler, and it’s clear right away:

a) with CALCFIELDS

IF Cust.FINDSET THEN
  REPEAT
    // Balance is not calculated, we have to do it manually
    Cust.CALCFIELDS(Balance);
  UNTIL Cust.NEXT = 0;

b) with SETAUTOCALCFIELDS

Cust.SETAUTOCALCFIELDS(Balance);
IF Cust.FINDSET THEN
  REPEAT
    // No need for CALCFIELDS, Balance is returned already
  UNTIL Cust.NEXT = 0;

With option a, whenever you hit the CALCFIELDS, the NST obeys and fetches the sum with a separate call. With option b, there is a single SELECT statement, which already includes an OUTER APPLY clause that calculates the SUM for each row retrieved.

Pretty cool stuff.

4. ADO.NET

The whole shebang now runs on ADO.NET, instead of the OLEDB/ODBC it used before. There are plenty of benefits to that, performance included.

ADO.NET streamlines deployment and administration, increases performance, reduces the number of SQL connections (Jörg has explained some drawbacks of this approach, but I think that generally this is a good thing), reduces memory consumption, and maybe a couple of other things.

5. Unicode

I’ve already blogged about this, and Jörg has also mentioned it, so I won’t play the same tune yet another time. NAV is now Unicode, which allows you to store characters from any language, all at the same time.
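A minimal sketch of what that means in practice (the customer number is an assumption, taken from the standard CRONUS demo data):

Cust.GET('10000'); // assumed customer no. from the CRONUS demo data
Cust.Name := 'Müller Москва 東京'; // Latin, Cyrillic and CJK, all in one field
Cust.MODIFY;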

Unfortunately, Unicode is not as Unicode as I’d truly love it to be, because the object captions remain tied to the chosen database collation (yes, you still need to choose one). That practically means that while you’ll be able to store characters from any alphabet, your RTC user interface will remain limited to a single character set.

Wrap up

So, to wrap it up, there are a lot of new things, big and small, that have changed and that deliver better performance, better user experience, or both.

You may notice that I didn’t mention queries. Yes, they are a mind-boggling improvement over previous versions, but they are simply a completely new feature, not something that NAV had and now has better than before. My list here is a list of tweaks and tune-ups that take the things we are used to to a whole new level. Queries? Well, they are out of this world, but their true power is yet to come, when (I’m kind of sure it’s about “when”, not “if”) we’ll be able to use them as sources for pages or reports.

Vjeko

Vjeko has been writing code for a living since 1995, and he has shared his knowledge and experience in presentations, articles, blogs, and elsewhere since 2002. Hopelessly curious, passionate about technology, and an avid learner of languages, human or computer alike.

This Post Has 21 Comments

  1. Benoit D.

    Very nice post !

  2. Peter

    Great article with great news 🙂
    However, it doesn’t quite match the findings of Per Mogensen (Mr. Mergetool) as he documents here: http://www.youtube.com/watch?v=18HoElGEs_0
    Any comments on that?

    1. Vjekoslav Babic

      Thanks, Peter! I don’t know how to comment on that, really. It obviously shows that it runs the slowest under RTC on 2013, which may be due to several things. What Per Mogensen’s experiment doesn’t show is the overall performance, and I am pretty sure all variation comes from the client specifics, not from the DB layer.

  3. Peter

    So you suggest it might be due to changes in the NAV posting routine or demodata, that it now is slower? Then it would be great to see him repeat the test with a NAV2013 client on a NAV2009 database.
    It could also be interesting to look more at the changed locking principle – it might be a bit slower, but give better concurrency.
    I’m also still a bit worried about the changed Dimension handling. No doubt it will result in fewer writes, but I fear it will also result in a lot more reads to find the Dimension Set ID.

  4. Vjekoslav Babic

    No, I suggest it’s because of the overhead of the RTC. The Classic client has everything locally; it runs everything on the same box. The RTC has to talk a lot to the NST and it loses time there – this may explain why the execution is slower with the RTC than with the Classic client.
    My suggestion here is – maybe the SQL communication itself is equal or maybe (hopefully :)) faster, but we can’t test that the way Per is testing it – because the RTC is in the way and constantly talks to the NST (you see all those messages, progress bars etc. that pop in and out of view).
    So – if the test is executed without any user interface, basically a function which calls a codeunit which then executes the work without ever having to come back to the client, then you have an actual benchmark of the back-end code against SQL Server, and if that’s slower, then it’s probably slower because of the DB layer.
    So – to re-state this, Per’s original benchmark doesn’t show how fast SQL Server performs, it only shows how the whole hodgepodge performs.

  5. Peter

    But his tests show the 2013 RTC beta is slower than the 2009 RTC…

  6. Marcel Lathouwers

    Thanks for your post.
    My concern is more about the NST. A lot of caching and other tricks are done there now. First of all, it sounds like the hardware requirements of the NST need a boost, no?
    My worry is that the inefficiency just moves from SQL to the NST, but hopefully I’m wrong.

  7. Vjekoslav Babic

    Peter – I’ve done intensive testing myself now, and I definitely think there is something fishy with Per’s tests. I’ll publish my results here on Monday, together with the tests themselves – so whoever doesn’t believe it can check for him/herself.

  8. Vjekoslav Babic

    Marcel, I am not sure the hardware requirements would need a boost, but the NST seems very optimized compared to the old version. Caching doesn’t mean you’ll need to boost the hardware, especially because you can tune the caching with the configuration options. If you don’t want the cache, you can skip it.
    Also, about the inefficiencies – I’d say it was about time they moved away from SQL. We have a beast of an RDBMS that wins all benchmarking tests, yet it creeps under NAV because NAV was using it inefficiently. NAV is now finally handling SQL as efficiently as possible. I would not call caching an “inefficiency”, because caching is what every system does to improve performance. Think of what hardware storage solutions do – beef up the hardware with 32 GB of RAM, and then cache your whole database to provide snappy response times. If that’s what the NST does, then I hope it does even more of that, and if that requires me to add 2, 4, 8, 16 or however much more RAM to the box, I’d do it. That box should be dedicated to running NAV and should be capable of handling the workload, which should be proportional to the number of users, and thus to the capability of the company to pay those couple of hundred or thousand dollars for a couple more gigs of RAM. Don’t you agree?

  9. Peter

    Microsoft has also described the changes here: http://msdn.microsoft.com/en-us/library/hh169480(v=nav.70).aspx

    However, I’m still a bit puzzled about the caching when having multiple NAV Servers. I assume any reads in a locked table always ignore the cache, but I fear a lot of old custom code might have worked fine without a LOCKTABLE for years. Now this code will often get outdated data, and might produce inconsistency in the database.

    Is it just me being paranoid? 🙂

    1. Vjekoslav Babic

      Peter, thanks for the link. I keep forgetting that they now publish all help information to the Web. Regarding LOCKTABLE, I don’t have a clue before I test, but I assume there will be more changes between the 2013 Beta and the 2013 RTM. I haven’t had a chance to test the LOCKTABLE behavior yet, but I did test some other transaction types, as well as a bit of concurrency, and it does leave a bit to be desired – I’ll blog about it soon.
