NAV performance in various configurations

Yes, I have been a lazy blogger lately. With so many people blogging about NAV nowadays, it’s really difficult to come up with original content all the time.

Let’s venture a little bit out of my .NET and JavaScript comfort zone and hang around a more formidable beast: SQL. And as you know, SQL comes in many shapes. It comes in the shape of your rack server, or in the shape of your laptop. But it also comes in the shape of clouds. Many clouds.

So, I decided to test the performance of NAV under various configurations, including on-prem, Azure VM, and SQL Azure, and share my findings with you. And while I am writing these lines, there are four machines sweating it out under some performance test code I recently wrote.

As soon as I collect the results, I’ll share them with you in a series of three posts:

  • On-prem performance results
  • Azure VM performance results
  • SQL Azure performance results

Now, some general notes about these tests and their results.

Performance vs. Concurrency

First of all, let’s clear one thing up. I am not testing concurrency, which is how many users you can unleash at the same time in the same tenant or the same NST, or whatnot. I don’t care (yet) about that. I might or might not get around to testing concurrency at a later time, but for now I am focusing on performance.

Performance is the raw output that a specific setup can produce for a single user at a time.

Think of it like this: performance is how fast your car can drive. Concurrency is how many cars (of the same or different types) a road can sustain at the same time without resulting in crashes or jams.

Why do I care about performance?

For a very simple reason. I wanted to measure what kind of hardware (or virtual) configuration you need to achieve a certain performance goal. Not everyone needs hundreds of users. Measuring a single user under a specific configuration gives you a nice picture of how much money you’ll need to shell out for hardware, or how big a virtual machine you’ll need to go for.

Any specifics about my goals?

Yes.

For on-prem, I wanted to find out what mattered most in terms of hardware: a fast processor, fast disks, SSD vs. HDD, and so on.

For Azure, I wanted to find out how different machine sizes affect performance, whether you should go for the A tier, the D tier, or maybe the G tier, and then how big to go within the tier.

For SQL Azure, I wanted to find out which size to go for, and also to see where it behaves well and where it doesn’t.

How did I measure?

All of the tests run in iterations, and for some tests there are tens of thousands of iterations. Also, all tests log their results into the database. So, measuring the time elapsed between when a specific test began and when it completed wouldn’t be fair, because it would include overhead that I don’t care about.

Therefore, I used the .NET System.Diagnostics.Stopwatch class, which allows very precise time measurement and makes it possible to stop and resume measuring in the middle of an iteration. This way I was able to precisely measure a specific operation, regardless of what else happens during an iteration or during the test preparation and completion stages.

Many tests include data rollbacks as part of each iteration, to allow every iteration to work on exactly the same dataset. When rollbacks are included, the rollback time is not measured.
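
As an illustration, here’s a minimal C/AL sketch of that measurement pattern, assuming a DotNet variable named Stopwatch of type System.Diagnostics.Stopwatch and a rollback step in the middle; the variable names and structure are my assumptions, not the actual test code:

  // Hypothetical sketch only: Stopwatch is a DotNet variable of type
  // System.Diagnostics.Stopwatch; ElapsedMs is an ordinary numeric variable.
  Stopwatch := Stopwatch.StartNew();           // start timing the measured operation
  // ... the operation under test runs here ...
  Stopwatch.Stop();                            // pause before work we don't want to measure
  // ... roll back the data so the next pass sees the same dataset ...
  Stopwatch.Start();                           // resume timing for the next measured operation
  // ... more measured work ...
  Stopwatch.Stop();
  ElapsedMs := Stopwatch.ElapsedMilliseconds;  // this is the value that gets logged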

All tests ran on a CRONUS database from NAV 2016 CU5.

I restored the database, imported the objects, and ran all tests once to warm up the configuration. Then I deleted the results and ran all tests five times in a row. Finally, I collected the raw data and calculated the averages.

The tests

I have designed a total of 14 different tests to cover different aspects of both NST and SQL performance, with some of them measuring how well the two work together.

So, here’s the list of the tests I ran.

  • No. Series Assignment: As its name implies, it measures the time it takes to assign numbers from a number series. In each iteration, the test clears the database caches, and then assigns a new number from the number series.
  • Random Vendor Read: This test measures random read performance. In each iteration it first clears the caches, then reads the first vendor from the database, and then jumps to a random vendor in the database. In essence, it measures how well SQL responds to simple queries.
  • Random Vendor Write: The “random” in the name here is misleading (a relic from an earlier state of the test, which I noticed too late). It measures the time it takes to insert a new vendor with the OnInsert trigger executed.
  • Cursor Read: This one pushes the SQL Server engine to the limit. In every iteration it clears the cache, then reads all items from the database exclusively using the FINDFIRST function with a little help from a primary key filter. It sends a number of very inefficient queries to SQL Server by simulating a cursor-like iteration (there’s a rough sketch of this pattern right after the list).
  • GUID Insert/Delete: This is another one that pushes the SQL Server engine to the limit. It writes into a table that has a clustered index over a GUID field. In each iteration it inserts one thousand rows, and then deletes rows one by one while there are rows in the table, this time sorting them by another, non-indexed GUID field. This test takes the SQL layer apart on so many different levels, and it’s going to be amazing to see how different tiers handle it, or – better yet – struggle to handle it.
  • Modify All Vendors: This is primarily a business logic test that measures the overall performance of the system, rather than a specific aspect of SQL. Here, in each iteration, on a clean cache, all vendors are modified with the OnModify trigger executed. It causes a lot of traffic on both the NST and SQL ends.
  • Post General Journal: Another business logic test. In each iteration it simply posts a general journal consisting of two lines. This test is a good indicator of the overall system performance.
  • Post Sales and Purchase Orders: The last of the business logic tests. This one posts a sales order and then a purchase order, for the same item, at the same location, with posting dates increasing and decreasing respectively, all with automatic cost adjustment and automatic cost posting to G/L. This test measures the performance of several of the most critical system operations and gives you a good feeling for how the system will behave in real life.
  • No-index Filtered Read: This is another SQL test. In each iteration it sets a unique filter on a text field to force SQL into a table scan. Since the filter is based on a GUID, there is no way for SQL to outsmart it, so it really has to go all the way on foot.
  • Unique Read: Another crazy C/AL exercise for SQL that simulates cursor-based access with inefficient filtering. In each iteration it navigates through G/L Entries sorted by account number, and then jumps between different account numbers to identify the unique account numbers used (also sketched after the list). Very inefficient, and designed to test raw SQL performance under difficult conditions.
  • Modify All G/L Entries: This one does MODIFYALL without the trigger, to show how SQL Server performs under optimized conditions. This test exposes the strengths of the SQL tier because it allows it to write data in the most efficient way possible.
  • Delete All G/L Entries: Similar to the previous one, only this one deletes all entries without the trigger. Same idea, only more intensive on SQL storage management, as it exposes how efficiently SQL Server manages page (de)allocation.
  • Rollback Speed: This test first deletes all G/L entries from the database, and then measures how well SQL performs at rolling back that change. Another interesting test to pay attention to, as it does a pretty good job of exposing raw SQL performance.
  • Run Query: This test runs a fairly complex query and retrieves the results. Even though it is designed to show how SQL performs at its best (because querying the data is one of those things SQL is designed to perform bl***y efficiently), I believe this test is pretty much useless. After the first run of the query, SQL will cache the execution plan and keep the query results in memory, so when another iteration hits, it will reuse both, and I am not really sure what this test measures. If I could spare more time here and add some random filters to prevent SQL from reusing the cache, this test would expose much more than it currently does.
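
To give you a feeling for what the Cursor Read test does in C/AL, here is a rough sketch of that cursor-like iteration; the table (Item) and the exact filtering are my assumptions, not the actual test code:

  // Hypothetical sketch: read all items one by one using only FINDFIRST,
  // moving a primary key filter forward after each row, so every pass
  // through the loop sends a separate TOP 1 query to SQL Server.
  Item.RESET;
  Item.SETFILTER("No.",'>%1','');              // start before the first item
  WHILE Item.FINDFIRST DO BEGIN
    // each FINDFIRST above is one measured read
    Item.SETFILTER("No.",'>%1',Item."No.");    // move the filter past the row just read
  END;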

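The Unique Read test follows the classic C/AL pattern for collecting distinct values, sketched below under the assumption that it runs against the G/L Entry table sorted by G/L Account No.:

  // Hypothetical sketch: walk G/L Entry sorted by account number and jump
  // over all entries of each account to find the distinct account numbers used.
  GLEntry.RESET;
  GLEntry.SETCURRENTKEY("G/L Account No.");
  IF GLEntry.FINDFIRST THEN
    REPEAT
      // GLEntry."G/L Account No." holds a newly found account number here
      GLEntry.SETRANGE("G/L Account No.",GLEntry."G/L Account No.");
      GLEntry.FINDLAST;                        // jump to the last entry of this account
      GLEntry.SETRANGE("G/L Account No.");     // remove the filter again
    UNTIL GLEntry.NEXT = 0;                    // NEXT lands on the first entry of the next account
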
So, that’s it. My tests have now run on a range of different tiers, and it’s time to go and compile the results.

Stay tuned, as over the next few days there will be a series of three posts (and perhaps more) letting you know how different on-prem, Azure VM, and SQL Azure tiers performed under the duress I described above.

Vjeko

