How I Reduced Data Upgrade Time By 78 Hours

Upgrade projects are lots of fun. They are full of challenges that keep you busy day and night.

I encountered a very interesting challenge on my last upgrade project. The object upgrade was complete and the data upgrade procedure was ready to go, but when we started the first tests we realized that Step 1 alone took some 39 hours to complete. Without even bothering to measure Step 2, we knew we had to do something about it. The customer runs a 24/7 business and cannot accommodate that much downtime just to upgrade data from NAV 2009 R2 to NAV 2013 R2.

Eventually, we got it down to under a second. If you want to learn how, read on.

At the heart of the problem was the G/L Entry table. There are 11 companies in the database, and the largest one has 54 million records in the G/L Entry table, so nothing touching it can really be fast. But why was the G/L Entry table involved in Step 1 of the upgrade procedure in the first place? Now this was funny.

For one reason or another, in this specific localization, two code fields in the G/L Entry table had been declared as Text[20] instead of Code[20] since a very old version of NAV. They are G/L Account No. and Bal. Account No., and I am still trying to wrap my brain around the fact that somebody thought it was a smart idea to make these two fields Text. Microsoft decided to change them to Code[20], a decision that was not so much smart as simply necessary.

Changing a field type is not rocket science: in Step 1, move the values to the upgrade table and empty the fields in the original table; then, in Step 2, change the type of the fields, move the values back from the upgrade table, and finally empty and delete the upgrade table. Easy.
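In SQL terms, that by-the-book procedure boils down to something like the following sketch. The upgrade table name is made up, and the set-based statements are only an illustration: the real process crawls through the records one by one in C/AL, which is exactly where the trouble starts.

-- Step 1 equivalent: stash the values in an upgrade table, then blank the
-- original fields ([Company1$GL Entry Upgrade] is a hypothetical name).
INSERT INTO [Company1$GL Entry Upgrade]
  ([Entry No_], [G_L Account No_], [Bal_ Account No_])
SELECT [Entry No_], [G_L Account No_], [Bal_ Account No_]
FROM [Company1$G_L Entry];

UPDATE [Company1$G_L Entry]
SET
  [G_L Account No_] = '',
  [Bal_ Account No_] = '';

-- Step 2 equivalent: after the field type has changed, move the values back.
UPDATE e
SET
  e.[G_L Account No_] = u.[G_L Account No_],
  e.[Bal_ Account No_] = u.[Bal_ Account No_]
FROM [Company1$G_L Entry] e
  JOIN [Company1$GL Entry Upgrade] u
    ON u.[Entry No_] = e.[Entry No_];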

Everybody knows that some things are simply impossible until somebody, who doesn't know that, makes them possible. (Albert Einstein, probably apocryphal)

However, doing that with hundreds of millions of records spread across 11 companies can by no means be a fast procedure, at least not the way C/AL goes about it. We tried to beef up the hardware, but it helped only a tiny fraction: we were still in the high 30s in terms of hours for Step 1. And Step 2 would take exactly the same time, because it needs to do exactly the same kind of work to move the data back to the original fields.

Obviously, NAV, with its loopie-loopie cursor type of data upgrade process, is not the fastest animal on the farm. We needed to take this down to SQL if we really wanted to improve the speed.

Our next step was to give up on the upgrade table altogether. We added two additional fields to the G/L Entry table, both of type Code, and we wrote a SQL script to copy the content of the two original fields into the two new fields and then set the two original fields to ''. And it helped: Step 1 went from 39 hours down to 2 hours, meaning both data upgrade steps together dropped from 78 hours to 4. Quite an improvement. But the whole upgrade still meant around 18 hours of data munching, which puts some pressure on completing it in the course of a night, so we needed to squeeze some more performance out of it.
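For illustration, the copy script was essentially a single set-based UPDATE per company, along these lines (the names of the two new fields are made up for the sketch):

-- Sketch only: [New GLAccountNo] and [New BalAccountNo] stand in for the two
-- fields we added. T-SQL evaluates all SET clauses against the pre-update
-- values, so the copy and the blanking happen safely in one statement.
UPDATE [Company1$G_L Entry]
SET
  [New GLAccountNo] = [G_L Account No_],
  [New BalAccountNo] = [Bal_ Account No_],
  [G_L Account No_] = '',
  [Bal_ Account No_] = '';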

An idea popped into my head, inspired by the way NAV 2015 handles syncing data through upgrade codeunits. Something like that had to be possible here, too. However, it would take quite some work, because there are tons of SIFT views and keys on the G/L Entry table, which makes it extremely difficult to write a script that properly kills and then restores all the object infrastructure.

Finally, I came up with this concept:

  1. Add two columns to the table using the Development Environment.
  2. Drop the same two columns from the underlying table using T-SQL. The table is now out of sync, because the table object contains two columns that are no longer present in the SQL table.
  3. Rename the original columns to the names of the two dropped columns using T-SQL. At this step the table is still out of sync, but the original columns are now renamed, so effectively I have "moved" the data to the two new columns I added in step 1.
  4. Add two new columns with the original names using T-SQL. Assign a default constraint to each of them, so that existing rows automatically get a blank value. At this stage the table is back in sync, but the data from the original columns now sits in the new columns.
  5. Drop the default constraints from the table. They were only needed to populate the NOT NULL columns for the existing rows during the ADD, so they can go.

All in all, this took under a second to complete for the G/L Entry table in each of the 11 companies. In Step 2, you just reverse the process, and it again takes under a second to complete.

Here's an example of the script:

-- Step 2 of the concept: drop the two columns that were added through the
-- Development Environment. The table object is now out of sync with SQL.
ALTER TABLE [Company1$G_L Entry]
  DROP COLUMN
    [Temp GLAccountNo],
    [Temp BalAccountNo];

-- Step 3: rename the original columns to the names of the dropped ones,
-- effectively "moving" the data without touching a single row.
EXEC sp_rename
  'dbo.[Company1$G_L Entry].[G_L Account No_]',
  'Temp GLAccountNo',
  'COLUMN';
EXEC sp_rename
  'dbo.[Company1$G_L Entry].[Bal_ Account No_]',
  'Temp BalAccountNo',
  'COLUMN';

-- Steps 4 and 5: re-add the original column names as blank NOT NULL columns
-- (the default constraints fill the existing rows), then drop the
-- constraints, which have served their purpose.
ALTER TABLE [Company1$G_L Entry]
  ADD
    [G_L Account No_] [varchar](20) NOT NULL CONSTRAINT [Company1$def_gl] DEFAULT '',
    [Bal_ Account No_] [varchar](20) NOT NULL CONSTRAINT [Company1$def_bal] DEFAULT '';
ALTER TABLE [Company1$G_L Entry]
  DROP CONSTRAINT
    [Company1$def_gl],
    [Company1$def_bal];

Of course, when migrating the data back, the script is slightly different:

-- Reversal for upgrade Step 2: drop the freshly re-created original columns,
-- rename the temporary columns back to the original names, and re-add blank
-- temporary columns to keep the table object and the SQL table in sync.
ALTER TABLE [Company1$G_L Entry]
  DROP COLUMN
    [G_L Account No_],
    [Bal_ Account No_];

EXEC sp_rename
  'dbo.[Company1$G_L Entry].[Temp GLAccountNo]',
  'G_L Account No_',
  'COLUMN';
EXEC sp_rename
  'dbo.[Company1$G_L Entry].[Temp BalAccountNo]',
  'Bal_ Account No_',
  'COLUMN';

-- As before, the default constraints only exist to fill the existing rows,
-- and are dropped immediately afterwards.
ALTER TABLE [Company1$G_L Entry]
  ADD
    [Temp GLAccountNo] [varchar](20) NOT NULL CONSTRAINT [Company1$def_gl] DEFAULT '',
    [Temp BalAccountNo] [varchar](20) NOT NULL CONSTRAINT [Company1$def_bal] DEFAULT '';
ALTER TABLE [Company1$G_L Entry]
  DROP CONSTRAINT
    [Company1$def_gl],
    [Company1$def_bal];

And thus these two little scripts reduced the total upgrade time by 78 hours. Of course, the whole upgrade process still takes some 6 hours to execute, but there is a huge difference between 84 and 6.
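By the way, since each script has to run once per company, you can let the catalog enumerate the target tables for you, as in this minimal sketch:

-- Find every company's G/L Entry table in the database (11 of them here),
-- so the per-company scripts can be generated rather than typed by hand.
SELECT name
FROM sys.tables
WHERE name LIKE '%$G_L Entry';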

I hope you find this little trick useful, and I hope it saves you some time in your time-critical upgrade projects.

Vjeko

Vjeko has been writing code for a living since 1995, and he has shared his knowledge and experience in presentations, articles, blogs, and elsewhere since 2002. Hopelessly curious, passionate about technology, avid learner of languages, human and computer alike.

This Post Has 36 Comments

  1. markbrummel

    Good job!

    I remember that the Belgian database had Text[20] too. The reason for that was the sorting in the classic database: the Belgian G/L Account had to sort 1, 10, 100, and the only way to do that in classic was to use a Text field.

    This is obsolete with SQL Server, and now, obviously, Microsoft has decided to undo that change since the classic database no longer exists.

    Cheers and thanks for the post.

    1. Vjeko

      So, I have saved many people 78 hours of data munching and watching the stopwatch 🙂 Whoever reads this post and uses this trick, just send me your company details so I can invoice you for 78 hours 😛

  2. Michael

    Thank you very much for this post. It is a very good trick. But what happens when the data in the text fields is not correct? If the text fields contain lowercase letters, you get problems in NAV when you copy them to a Code field.

    1. Vjeko

      Well, in that case you should do an UPDATE at the SQL level and simply change those columns to uppercase. We didn't have that problem, because all our values were strictly numeric.
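      Something along these lines would do, as a sketch (run it before the rename steps):

      -- Uppercase the two columns so the values are valid for Code fields.
      UPDATE [Company1$G_L Entry]
      SET
        [G_L Account No_] = UPPER([G_L Account No_]),
        [Bal_ Account No_] = UPPER([Bal_ Account No_]);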

  3. ptijsma

    Great trick! I imagine this gave you some time off during the weekend 😉

    1. Vjeko

      If I didn’t do this, I’d still be in front of my monitor watching data upgrade step 2 crawling at about 25% of the process 🙂

  4. Claus Rasmussen

    Do you have a good performance workaround for the Unicode update that happens when upgrading from NAV 2009 to NAV 2013?

    1. Vjeko

      @Claus, no, unfortunately not: I don't have a workaround for that 🙁

  5. c4r10k4

    Hi Vjeko!! Good Job!

    I am testing this example, but in 2013 R2, when I executed the query

    ALTER TABLE [COMPANY1$G_L Entry]
      DROP COLUMN
        [G_L Account No_];

    it showed the error: The object 'COMPANY1$G_L Entry$VSIFT$1' is dependent on column 'G_L Account No_'.

    I think I misunderstood… First I executed script 1… then I imported the G/L Entry object from NAV 2013 R2, and after that I executed script 2??

    Thanks!

    1. Vjeko

      @carioka: you need to turn off all keys that have SIFTs. Then try and let me know.

  6. Jesús Almaraz Martín

    Thank you Vjeko. The original process took you 84 hours, but did you try running it after first removing all the keys except the primary one? Sorry if I am making no sense, I am a newcomer, so forgive me!!!

    I ask because that is what I do when running this or similar steps.

    Regards from Spain, and great MVP training days.

    1. Vjeko

      Hi Jesús! No, this does not help, as the process is not slow because of a key: it's reading and writing millions of rows, and unnecessarily so. In any case, thanks for the comment, and I'm glad you enjoyed the class in Madrid! All the best for Christmas and the new year!

  7. Gonzalo

    Hi Vjeko, do you mean that we have to execute that script in step 1 of the following guide??

    0435.UpgradingNAV2009R2orNAV2009SP1toNAV2015.pdf

    The first step took me just several minutes, but I'm struggling with step 6 (Synchronize Schema with the option Later); it takes centuries.
    Gonzalo

    Thanks

    1. Vjeko

      @Gonzalo – Yes, you should do that. And as for Synchronize Schema, it will take centuries, and there is nothing you can do to speed it up, really. It has to go from varchar to nvarchar on all columns in all tables, and that's gonna take ages.

  8. Laura

    Hi Vjeko!
    We are trying to execute the first SQL script, and when we try to execute the sp_rename we get an error message about dependencies.
    What should we do? Can we drop the dependencies?
    Thanks in advance!

    1. Vjeko

      I don’t know what’s wrong. I don’t know what the error message about dependencies says exactly, so I can’t tell you what exactly to do.

    2. Phenno

      Check if you have VSIFT views, meaning indexed views, on that table. If yes, they should be dropped and regenerated later.
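      A quick way to spot them is to query the catalog, something like this sketch (NAV names these views Company$Table$VSIFT$KeyNo):

      -- List the indexed views generated for the G/L Entry table's SIFT keys.
      SELECT name
      FROM sys.views
      WHERE name LIKE 'Company1$G_L Entry$VSIFT%';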

      1. BKR

        How can I regenerate these indexed views?

        1. Vjeko

          You can, for example, switch off the MaintainSIFTIndex property on each of the keys, and then switch it back on later.

          1. BKR

            Thx for your response… unfortunately it's not possible on the primary keys… :S

  9. phenno

    Vjeko,

    I'm interested in how long the next step took for you: converting the DB to NAV 2013, where NAV drops and recreates all the tables?

    1. Vjeko

      Whoa 🙂 You ask a lot. This was a long time ago, and I can't remember exactly. All I can say is that it didn't take all that long, as the whole upgrade was completed between two work shifts.

      1. Phenno

        Vjeko,

        Meanwhile, I found out that NAV 2013 regenerates all keys. Since this step is repeated on the NAV 2015 FOB import, the solution was to simply delete all indexes prior to the conversion.

        Regarding your solution with different fields, I find it easier to write an ADO SQL statement with UPDATE or INSERT rather than to play with the fields. It is easier to implement, though probably slower to execute than your solution.

        Greetings from Sarajevo 🙂

        1. Vjeko

          Well, whatever works for you should be good enough here. The goal is to get the job done, and I feel more at home in Query Analyzer than writing code just to execute more code 🙂 Thanks for sharing your thoughts! And greetings from Phoenix 🙂

  10. Fatos

    Hello, I want to ask a question: on average, how long does the data upgrade take for medium-sized companies? My partner said they can do the data upgrade in 6 days. Our DB is 500 GB, and we have 3 companies in 1 DB. Value Entry total = 111 million; Ledger Entry Dimension = 400 million; Posted Document Dimension = 30 million; Production Document Dimension = 4 million. We have 12 dimensions. OK, I know this is a lot of data, but I can't close the company for 6 days (no production, no sales, no entries, etc.). Is it possible? Do you have any opinion about the time, however approximate? Your opinions are very valuable to me. Thanks. (Our machine is an Intel Xeon E5-2620v3 2.40 GHz, VT Core 8 / 150 GB DDR4 VT RAM / 1.60 TB RAID 5 / Server 2012 R2.)

    1. Vjeko

      RAID 5 is going to cost you a lot of valuable time during the upgrade. People still go with RAID 5 even though it has been proven time after time that it does not work well with NAV.

      Other than that, it's going to take a lot of time, indeed. If you can take it company by company, that would be best.

      Also – why do you have 111 million value entries? Can you compress them?

      1. phenno

        As far as I know, there is no compression method for value entries (or, actually, for anything in the inventory ledgers).

  11. Phenno

    Pay attention: if you choose the company-by-company path, the last step will be to somehow merge the last migrated company into the already existing production database. This can be done with Import-NAVData, but that can also be very slow: up to 12 hours for 10 GB of data…

  12. Marc Pin

    Hi! Any idea for optimizing the dimension group generation in Step 2 of 2013 R2???
    My process is very slow… 3 days

    1. Vjeko

      Hi Marc! No, sorry, I don't have any off-the-top-of-my-head idea here.

      1. Marc Pin

        In a first update run I created all the dimension groups. I have thought of inserting the groups with a SQL query and then launching the NAV 2013 R2 step from that point on.

        1. Phenno

          If I remember correctly, in the first step of the object import in NAV 2009 there is an update of keys for the dimension ledger tables, which can last quite long, but it seems to me that it is completely unnecessary. I've tested with and without it and got the same results, so you could save some time there. Also, check your SQL hardware: make sure to use SSD disks if possible, and lots of RAM. tempdb is heavily used in the dimension update, so it must be tuned properly. And check whether MAXDOP is disabled; you could use it to utilize parallelism.
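          For the MAXDOP check, something like this sketch works (it needs sysadmin rights, and a value of 0 lets SQL Server parallelize freely):

          -- Enable the advanced option, then allow unlimited parallelism.
          EXEC sp_configure 'show advanced options', 1;
          RECONFIGURE;
          EXEC sp_configure 'max degree of parallelism', 0;
          RECONFIGURE;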

          1. Phenno

            Also, you can save some time if you disable the VSIFT indexes on tables with dimension fields before the update and enable them afterwards. In total it is faster and puts less stress on the transaction log.
