Git with NAV

It’s been a veeeeery long time since my last post. I was so busy at work and had to deal with some personal (health) issues that I didn’t get around to blogging much…

Anyway, lots of things have happened. I gave up (at least for now) on the compare tool I was creating; it was so time consuming that it ate up almost all of my spare time, and since we got the Application Merge Utilities from Microsoft, the need for it is not as strong as it used to be.

But eventually I will use it (or parts of it) in other projects.

But for now the big topic is Git!! I’m not the first one blogging about it; Git and source control in general (TFVC, for example) have been all over the place. I myself have wanted to use an SCM (Source Control Management) system for quite some time, and this year we finally had the time to do so.

After a lot of thinking, testing, and analyzing of our workflows, we decided to use Git as our source control system and Team Foundation Server for project management. The main reason was the complexity of NAV versions that inherit from each other (W1 -> Local -> Local ISV -> Customer Solution). We can separate these into repositories while still maintaining their links through remotes.

For different versions of ISV solutions (NAV2009, NAV2013, NAV2013R2, NAV2015, …) we will be using different branches. This might change, however; if anybody has another idea or can think of an issue with this approach, please tell me about it!
I could also see having these in separate repositories, but I’m not sure whether that would be overkill.

For interacting with NAV (importing/exporting source code, tracking changes, etc.) I extended the tool that we use internally.
This tool manages all NAV versions in use (similar to existing tools such as the DynamicsNAV protocol handler) but is more advanced in its behavior and built around a central SharePoint list of databases. It is a very handy tool that handles protocol requests, registers a COM interface for flawless page opening (no second client is started), and many other things to ensure true multi-client handling.
The management tool was extended around the Git idea: simple Git tasks like commit, pull, and push; basic TFS tasks like attaching work items to commits and setting their state at commit time; a work-item query browser; object tracking based on SQL triggers on the Object table; and import/export through NAV’s hidden COM interface…
Lots of things going on to make the life of our developers easy.
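I won’t go into how our tool drives Git internally, but as a rough idea, committing exported objects from a .NET tool could look something like this minimal sketch (assuming the LibGit2Sharp library, which is just my pick for illustration, not necessarily what we use):

using System;
using LibGit2Sharp;

class GitTasks
{
    // Stages everything and commits - e.g. right after exporting objects from NAV.
    // The repository path, name, and e-mail are placeholders.
    public static void CommitExportedObjects(string repoPath, string message)
    {
        using (var repo = new Repository(repoPath))
        {
            Commands.Stage(repo, "*");
            var signature = new Signature("NAV Developer", "dev@example.com", DateTimeOffset.Now);
            repo.Commit(message, signature, signature);
        }
    }
}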

We still need Visual Studio and Git Bash for certain tasks, but everyday NAV developer business is easy to manage.

Up to this point I’m very happy we took the time and switched to Git/TFS for development, and I hope more people will follow and that some day even the NAV team will support this out of the box 😉

Accessing Compressed Blobs from outside NAV (NAV2013) (Revisited)

I already blogged about this some time ago: https://devch.wordpress.com/2013/01/17/accessing-compressed-blobs-from-outside-nav-nav2013/
But that was during the beta phase of NAV2013, and since then I had not used the code in any real project. Since there were a few comments about it not really working, I revisited the solution and found that it was in fact not completely working.

So here is the working solution; it enables you to read both compressed and non-compressed BLOBs. I tested it with the Object and the Object Metadata tables.

using System;
using System.Data.SqlClient;
using System.IO;
using System.IO.Compression;

class NAV2013BLOBReader
{
    // Reads the BLOB in column 0 of the current reader row into ResultStream.
    // Returns 1 on success, 0 if the BLOB is empty, and NAV's own error code
    // 22926086 if decompression was requested but the magic header is missing.
    public static int GetDecompressedStream2013(SqlDataReader reader, bool decompress, ref MemoryStream ResultStream)
    {
        int fieldIndex = 0;
        // GetBytes with a null buffer returns the total length of the BLOB.
        int blobLength = reader.IsDBNull(fieldIndex) ? 0 : ((int)reader.GetBytes(fieldIndex, 0L, null, 0, 0));
        if (blobLength > 0)
        {
            using (Stream stream = new BlobReaderStream(reader, fieldIndex, blobLength))
            {
                if (decompress)
                {
                    // Compressed NAV2013 BLOBs start with a four-byte magic header.
                    byte[] magic = new byte[4];
                    long bytesRead = (long)stream.Read(magic, 0, 4);
                    if (bytesRead < 4L || !BlobMagicOk(magic))
                    {
                        return 22926086;
                    }

                    // The remainder of the stream is plain Deflate data.
                    using (DeflateStream deflateStream = new DeflateStream(stream, CompressionMode.Decompress))
                    {
                        deflateStream.CopyTo(ResultStream);
                    }
                }
                else
                {
                    stream.CopyTo(ResultStream);
                }
                return 1;
            }
        }

        return 0;
    }

    private static bool BlobMagicOk(byte[] checkMagic)
    {
        for (int i = 0; i < 4; i++)
        {
            if (checkMagic[i] != BlobMagic[i])
            {
                return false;
            }
        }
        return true;
    }

    private static byte[] BlobMagic = new byte[] { 2, 69, 125, 91 };
}
// Minimal read-only Stream over a SqlDataReader BLOB column, so the data
// can be streamed without loading the whole BLOB into memory first.
public class BlobReaderStream : Stream
{
    private SqlDataReader reader;
    private int fieldIndex;
    private int length;
    private int position;

    public BlobReaderStream(SqlDataReader reader, int fieldIndex, int length)
    {
        this.reader = reader;
        this.fieldIndex = fieldIndex;
        this.length = length;
    }

    public override bool CanRead { get { return true; } }
    public override bool CanSeek { get { return false; } }
    public override bool CanWrite { get { return false; } }
    public override long Length { get { return (long)this.length; } }

    public override long Position
    {
        get { return (long)this.position; }
        set { this.position = (int)value; }
    }

    public override void Flush()
    {
    }

    public override long Seek(long offset, SeekOrigin origin)
    {
        throw new NotSupportedException();
    }

    public override void SetLength(long value)
    {
        throw new NotSupportedException();
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        if (this.position < this.length)
        {
            // GetBytes copies up to "count" bytes starting at the current BLOB offset.
            int num = (int)this.reader.GetBytes(this.fieldIndex, (long)this.position, buffer, offset, count);
            this.position += num;
            return num;
        }
        return 0;
    }

    public override void Write(byte[] buffer, int offset, int count)
    {
        throw new NotSupportedException();
    }
}

To call the method simply use something like this:

string SQLServer = "<Your Instance>";
string Database = "<Your Database>";
using (SqlConnection connection = new SqlConnection(string.Format("server={0};Trusted_Connection=yes;database={1};connection timeout=5", SQLServer, Database)))
{
    connection.Open();
    SqlCommand command = new SqlCommand(string.Format("select [Metadata] from [Object Metadata] where [Object Type] = 1 AND [Object ID] = {0}", ID), connection);

    SqlDataReader reader = command.ExecuteReader();

    if (reader.Read())
    {
        var ms = new MemoryStream();
        if (NAV2013BLOBReader.GetDecompressedStream2013(reader, true, ref ms) == 1)
        {
            ms.Position = 0;
            StreamReader sReader = new StreamReader(ms);
            string text = sReader.ReadToEnd();
            return text;
        }
    }
}
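The same approach works for the compiled objects in the Object table; only the query changes. A hedged variant (the column and filter names below follow the standard NAV schema as I know it, so verify them against your database):

// Hypothetical variant: read the object BLOB from the Object table instead.
SqlCommand objectCommand = new SqlCommand(
    string.Format("select [BLOB Reference] from [Object] where [Type] = 1 AND [ID] = {0}", ID),
    connection);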

Extending NAV: Adding Try-Catch

Most modern languages offer decent error handling that enables you to catch errors and react to them. NAV offers such an option as well, but it’s not so nice to implement, and sometimes you are even forced to waste a precious codeunit just to make it work.

Just as a recap, or in case you were not aware of this:
If you wrap an IF statement around the call of a codeunit’s RUN function, you can catch an error happening in this codeunit and do your own handling.

That’s done like this:

IF NOT CODEUNIT.RUN(CODEUNIT::"Sales-Post") THEN BEGIN
  // => Do error handling here
END;

Today I’ll show you how you can do this in a generic way, without wasting a precious codeunit for each of your error-catching needs (you only need one for all catching).

  1. We create a codeunit that we call Try-Catch (as known from the C# language).
  2. Then we add a function to set an ID that identifies what we want to catch (which function/logic).
  3. Further, we add a function that enables us to specify parameters for our call, such as codes, a record, etc.
  4. Lastly, we need the handling or implementation that calls the actual, already existing logic that you want to catch.

The codeunit would then look like this:


OBJECT Codeunit 3032230 TryCatch
{
  OBJECT-PROPERTIES
  {
    Date=01.07.13;
    Time=16:02:18;
    Modified=Yes;
    Version List=;
  }
  PROPERTIES
  {
    OnRun=BEGIN
            CASE FunctionID OF
              1 : TestFunction();
              2 : TestFunction2();
              3 : TestFunction3();
              // Extend here:
            END;
          END;

  }
  CODE
  {
    VAR
      Parameters@1100113000 : ARRAY [100] OF Variant;
      FunctionID@1100113001 : Integer;

    PROCEDURE SetFunctionID@1100113004(_ID@1100113000 : Integer);
    BEGIN
      FunctionID := _ID;
    END;

    PROCEDURE SetParameter@1100113000(_Index@1100113001 : Integer;VAR _Parameter@1100113000 : Variant);
    BEGIN
      Parameters[_Index] := _Parameter;
    END;

    LOCAL PROCEDURE "------------------------------------"@1100113006();
    BEGIN
    END;

    LOCAL PROCEDURE TestFunction@1100113001();
    BEGIN
      ERROR('Hello World');
    END;

    LOCAL PROCEDURE TestFunction2@1100113008();
    BEGIN
      MESSAGE('param text: %1', Parameters[1]);
    END;

    LOCAL PROCEDURE TestFunction3@1100113009();
    VAR
      Item@1100113000 : Record 27;
    BEGIN
      Item := Parameters[1];
      MESSAGE('param Item: %1', Item."No.");
    END;

    BEGIN
    {
    }
    END.
  }
}

As you can see, you can extend the codeunit with a new implementation just by adding a new function with your own logic. You only need to assign your function a FunctionID in the OnRun trigger. Parameters can be accessed through the Parameters array, which is of type Variant, so even complex data types like a record can be passed this way.

The usage would look like this:

TryCatch.SetFunctionID(1);

TextParam := 'test text';
TryCatch.SetParameter(1, TextParam);

IF NOT TryCatch.RUN THEN
  MESSAGE('%1\%2\%3', GETLASTERRORCODE, GETLASTERRORTEXT, GETLASTERRORCALLSTACK);

If you need to do lots of error handling this can be quite useful 🙂

Extending ExcelBuffer (NAV2013)

Since NAV2013, the ExcelBuffer no longer uses Automation to create Excel sheets. Therefore, if you have custom implementations such as color formatting or text size, you need to rewrite your solution using OpenXML. I had to implement this for our product, so I’d like to present what I came up with.

First, please read the blog post from Lars-Bo Christensen : http://blogs.msdn.com/b/nav/archive/2012/10/05/use-open-xml-to-extend-the-excel-buffer-functionality-part-2-of-2.aspx?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+MicrosoftDynamicsNavTeamBlog+%28Microsoft+Dynamics+NAV+Team+Blog%29
It describes the basic idea and the concept behind it.

For my own implementation I used the same concept but pushed it a bit further and made it a bit simpler (in my opinion). My goal was to not rewrite any of my existing usage code from earlier versions, so every manipulation needed to happen inside the basic functions of ExcelBuffer.

I ended up with these 2 functions:

Globals

Name	DataType	Subtype	Length
CustomFontPool	DotNet	System.Collections.Generic.Dictionary`2.'mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'

GetCustomCellDecorator

GetCustomCellDecorator(IsBold : Boolean;IsItalic : Boolean;IsUnderlined : Boolean;Color : Integer;FontSize : Integer;VAR Decorator : DotNet "Microsoft.Dynamics.Nav.OpenXml.Spreadsheet.CellDecorator")
GetCellDecorator(IsBold, IsItalic, IsUnderlined, BaseDecorator);
Decorator := BaseDecorator;

// Handle Extension
IF (Color <> 0) OR (FontSize <> 0) THEN BEGIN

  FontIndex := STRSUBSTNO('%1|%2|%3|%4|%5',IsBold, IsItalic, IsUnderlined, Color, FontSize);

  CustomFont := BaseDecorator.Font.CloneNode(TRUE);

  // Color
  IF Color <> 0 THEN BEGIN
    CustomColor := CustomColor.Color;
    CASE Color OF
      3 :   CustomColor.Rgb := HexColor.HexBinaryValue('00FF0000'); // Red
      5 :   CustomColor.Rgb := HexColor.HexBinaryValue('001B1BC3'); // Blue
      10 :  CustomColor.Rgb := HexColor.HexBinaryValue('0022B400'); // Green
    END;
    CustomFont.Color := CustomColor;
  END;

  // Font Size
  IF FontSize <> 0 THEN BEGIN
    CustomFontSize := CustomFontSize.FontSize;
    CustomFontSize.Val := FontSizeValue.DoubleValue(FontSize);
    CustomFont.FontSize := CustomFontSize;
  END;

  Fonts := XlWrkBkWriter.Workbook.WorkbookPart.WorkbookStylesPart.Stylesheet.Fonts;
  AddFontToCollection(Fonts, CustomFont, FontIndex);

  Decorator.Font := CustomFont;
END;

Locals:

Name	DataType	Subtype	Length
CustomFont	DotNet	DocumentFormat.OpenXml.Spreadsheet.Font.'DocumentFormat.OpenXml, Version=2.0.5022.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'
CustomFontSize	DotNet	DocumentFormat.OpenXml.Spreadsheet.FontSize.'DocumentFormat.OpenXml, Version=2.0.5022.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'
CustomColor	DotNet	DocumentFormat.OpenXml.Spreadsheet.Color.'DocumentFormat.OpenXml, Version=2.0.5022.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'
HexColor	DotNet	DocumentFormat.OpenXml.HexBinaryValue.'DocumentFormat.OpenXml, Version=2.0.5022.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'
BaseDecorator	DotNet	Microsoft.Dynamics.Nav.OpenXml.Spreadsheet.CellDecorator.'Microsoft.Dynamics.Nav.OpenXml, Version=7.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'
Fonts	DotNet	DocumentFormat.OpenXml.Spreadsheet.Fonts.'DocumentFormat.OpenXml, Version=2.0.5022.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'
FontSizeValue	DotNet	DocumentFormat.OpenXml.DoubleValue.'DocumentFormat.OpenXml, Version=2.0.5022.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'
FontIndex	Text

AddFontToCollection

AddFontToCollection(_Fonts : DotNet "DocumentFormat.OpenXml.Spreadsheet.Fonts";VAR _CustomFont : DotNet "DocumentFormat.OpenXml.OpenXmlElement";_Index : Text) : Boolean
IF ISNULL(CustomFontPool) THEN
  CustomFontPool := CustomFontPool.Dictionary();

IF CustomFontPool.TryGetValue(_Index, TempFont) THEN BEGIN
  // Already in Collection
  _CustomFont := TempFont;
  EXIT;
END ELSE BEGIN

  // OpenXML Element Array
  Arr := Arr.CreateInstance(GETDOTNETTYPE(_CustomFont),1);
  Arr.SetValue(_CustomFont,0);

  _Fonts.Append(Arr);
  _Fonts.Count.Value := _Fonts.Count.Value + 1;

  CustomFontPool.Add(_Index, _CustomFont);
END;

Locals

Name	DataType	Subtype	Length
i	Integer
Arr	DotNet	System.Array.'mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'
TempFont	DotNet	DocumentFormat.OpenXml.OpenXmlElement.'DocumentFormat.OpenXml, Version=2.0.5022.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'

The newly created fonts are cached in the CustomFontPool collection to ensure that fonts are only created when necessary; already created fonts are reused.
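The same caching idea, expressed as a standalone C# sketch (class and method names are my own, purely to illustrate the pattern):

using System;
using System.Collections.Generic;

class FontPool
{
    // One font per unique style combination; the key mirrors the
    // STRSUBSTNO('%1|%2|%3|%4|%5', ...) index built in the C/AL code above.
    private readonly Dictionary<string, object> pool = new Dictionary<string, object>();

    public object GetOrAdd(string styleKey, Func<object> createFont)
    {
        object font;
        if (!pool.TryGetValue(styleKey, out font))
        {
            font = createFont(); // create (and append to the stylesheet) only once
            pool.Add(styleKey, font);
        }
        return font;
    }
}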

To use the new custom decorator, we just replace the two lines where ExcelBuffer calls the base function GetCellDecorator() (in WriteCellValue and WriteCellFormula):

//GetCellDecorator(Bold,Italic,Underline,Decorator);
GetCustomCellDecorator(Bold,Italic,Underline,Color,"Font Size",Decorator);

Color & “Font Size” are fields on the ExcelBuffer table which can be set when needed.

As you can see, there is no need to create a custom assembly with this implementation (unlike the one from Lars), just plain C/AL with .NET interop. You can also continue to use ExcelBuffer exactly as you did in the past; we just changed the decorator used to render the cells.

.Net Interop: Using Enumerations

If you are using .NET interop you may have come across Waldo’s blog post about enumerations. I have already used this nice trick successfully a few times.
Lately I had to deal with another Excel import, so I thought: why not use interop instead of Automation? This would have been a pleasure if the database had been NAV2013, because MS provides assemblies that do the “hard” work (have a look at the redesigned ExcelBuffer table). But since it was a 2009 R2 database, I started from scratch.

So I started with the ‘Microsoft.Office.Interop.Excel’ assembly and tried it that way. That all worked very well, since the code is almost identical to what we’re used to with COM Automation.

But then I wanted to open an existing workbook with a method that has a parameter of type “Microsoft.Office.Interop.Excel.XlPlatform”, which is an enum. So far so good; let’s use Waldo’s trick to get some nice-looking, understandable code (instead of some integer values…).

BUT WAIT!

Oh dear… there is no instance of this type available like in the example… hmm… and if I pass an uninstantiated .NET interop variable to the EnumToInt function, it will crash, saying it’s not instantiated…
That’s bad. Should I just use an integer and let the code be unreadable?
Of course not! There is another solution, which works even when you don’t have an instance of the enum you want to parse. Here comes the code:


// xlPlatformFullName is a TextConstant with this content:
// Microsoft.Office.Interop.Excel.XlPlatform, Microsoft.Office.Interop.Excel, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c
//
xlWorkBook := xlApp.Workbooks.Open(Filename, 0,TRUE,5,'','',TRUE,EnumToInt(xlPlatformFullName, 'xlWindows'),'\t',FALSE,FALSE,0,TRUE,1,0);

EnumToInt(_FullyQualifiedName : Text[1024];_EnumValue : Text[120]) : Integer
  EXIT(Convert.ToInt32(
    Enum.Parse(
      Type.GetType(_FullyQualifiedName,TRUE,TRUE),
        _EnumValue)
    )
  );

Variables

 Name    DataType    Subtype    Length
 Type    DotNet    'mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.System.Type
 Enum    DotNet    'mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.System.Enum
 Convert    DotNet    'mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.System.Convert
 

You use the EnumToInt function by providing the fully qualified name of the enum type you want to assign, plus its value.
If you don’t know the fully qualified name, just try to create a variable of that type; in the Subtype column in NAV you will find all the information you need. It’s just in the wrong order.

Let’s look at XlPlatform as an example. If you declare it as a NAV variable, you get this subtype:

'Microsoft.Office.Interop.Excel, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c'.Microsoft.Office.Interop.Excel.XlPlatform

The result should look like this:

Microsoft.Office.Interop.Excel.XlPlatform, Microsoft.Office.Interop.Excel, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c

Basically, you just need to delete the single-quote [‘] characters and move the type name from the end of the variable declaration (“Microsoft.Office.Interop.Excel.XlPlatform”) to the front, separated by a comma. In the example the Version and Culture are switched too, but I guess that part is not absolutely necessary. To be safe, keep the order: [Full Type Name With Namespaces], [Assembly Name without extension], [Version], [Culture], [PublicKeyToken]
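If you need this conversion more than once, the reordering is easy to script. Here is a small C# helper (my own sketch, not part of NAV) that turns the NAV subtype string into the assembly-qualified name Type.GetType expects:

using System;

class SubtypeHelper
{
    // Converts "'AssemblyName, Version=..., Culture=..., PublicKeyToken=...'.Full.Type.Name"
    // into "Full.Type.Name, AssemblyName, Version=..., Culture=..., PublicKeyToken=..."
    public static string ToAssemblyQualifiedName(string navSubtype)
    {
        int closingQuote = navSubtype.LastIndexOf('\'');
        string assemblyPart = navSubtype.Substring(1, closingQuote - 1); // text between the quotes
        string typePart = navSubtype.Substring(closingQuote + 2);        // skip the quote and the dot
        return typePart + ", " + assemblyPart;
    }
}

Running it on the XlPlatform subtype above yields exactly the reordered string shown earlier.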

Accessing Compressed Blobs from outside NAV (NAV2013)

21.01.14 – This solution was not completely working; here is the new solution, tested on NAV2013 R2: https://devch.wordpress.com/2014/01/21/accessing-compressed-blobs-from-outside-nav-nav2013-revisited/

Just a short hint for anyone interested in accessing BLOB fields in NAV. In the new version, the BLOB compression is no longer a proprietary format; NAV now uses the .NET Deflate classes to compress the binary objects. (In a previous blog post (https://devch.wordpress.com/2012/01/12/read-content-of-nav-notesblobs-from-sql/) I stated the need to disable compression on BLOB fields in order to access them from outside NAV.)

Therefore you can now easily use the DeflateStream class to decompress the content and access it.

So to decompress the content, the method would look something like this:

using System.IO;
using System.IO.Compression;

// Buffer size for the copy loop; any reasonable value works.
const int BUF_SIZE = 4096;

public static long Decompress(Stream inp, Stream outp)
{
    byte[] buf = new byte[BUF_SIZE];
    long nBytes = 0;

    // Decompress the contents of the input stream
    using (inp = new DeflateStream(inp, CompressionMode.Decompress))
    {
        int len;
        while ((len = inp.Read(buf, 0, buf.Length)) > 0)
        {
            // Write the data block to the decompressed output stream
            outp.Write(buf, 0, len);
            nBytes += len;
        }
    }
    // Done
    return nBytes;
}
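For completeness, a minimal call sketch; blobBytes is assumed to already hold the raw column value (e.g. reader.GetSqlBytes(0).Value), and note that, per the update at the top of this post, shipped NAV2013 BLOBs carry a four-byte magic header that must be skipped first:

using (MemoryStream input = new MemoryStream(blobBytes))
using (MemoryStream output = new MemoryStream())
{
    input.Position = 4; // skip the four magic-header bytes (see the revisited post)
    Decompress(input, output);
    string text = Encoding.UTF8.GetString(output.ToArray()); // needs System.Text
}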

Corrupt NAV SQL Database

Quite some time has passed since my last post; I’m sorry for that. Christmas is coming, projects are going on… I guess you’re all in the same situation.
Oh yeah, and the compare tool consumed a lot of time too.

So what I want to talk about is a very strange problem I faced on a customer’s database. We had situations where standard NAV 4.0 code led to an “Entry already exists” error.
Normally you would think there is nothing special about this, and I understand that. But this situation was different: I can 200% guarantee you that the record did not exist in the database! A check on the insert statement (IF Rec.INSERT THEN) returned FALSE, even though I watched the SQL table and the record was not there.

So there was this function (Get Shipment Lines on an invoice) that occasionally threw this error, only with certain data. I did about a hundred tests and scratched my head; I posted on Mibuso (read the complete story here: http://www.mibuso.com/forum/viewtopic.php?f=23&t=51779) but did not receive any useful advice…

But I managed to resolve the issue, and this is why I’m posting this: there was in fact some kind of data corruption in this SQL database, which I was only able to resolve by making a native backup (fbk) and creating a new SQL database into which I imported the backup.

I’ve never seen such data corruption in NAV databases before, and I want to make you aware that this possibility exists.
As I already said on Mibuso: I really hope nobody faces problems like this, but if you do, I hope you find this blog post and are able to resolve your issues by recreating the database this way.

Have a nice week!

A compare tool dedicated to NAV object text files and its pitfalls

Since I wrote about it yesterday on Luc van Vugt’s blog, I think it’s now time to talk about a project I’ve been working on for quite some time.

The project is a tool that enables you to bypass some of the pitfalls of NAV text object comparing when done in a regular text compare tool like WinMerge, BeyondCompare, and so on.

The basic concept of the tool is to take the files, parse them, generate objects out of the different parts, and treat them in the context they have. This enables context-sensitive comparing of these objects. A typical example would be two versions of an object, say table 27 (Item), one from a W1 database and one from a localized database. Now you want to merge some code done in either of the versions into the other one, but you are forced to go through all the captions shown as differences, which slows down your merging process.

There are many such cases where a traditional text compare won’t give you a good view of what is actually different between those objects. And this is where contextual comparing comes into play.

With the tool I’m working on you will be able to do things like:
– Ignore missing captions
– Route different captions (e.g. set a rule that ENG = ENU), as sketched below
– Identify parts that are contextually the same (like two variables with the same name) but are in different places inside the text file; those will be treated as equal, not different
– Set different strategies to find the corresponding part in the other object (find by ID, by name, by SourceExpr, etc.)
– Various ignore filters
– Safe merging (no ID conflicts; new IDs will be created)
– Modified files will always be consistent, no more missing ‘{‘ brackets because of manual file editing
– and many more
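To give an idea of the caption routing mentioned in the list, here is a toy C# sketch of the concept (the parsing is deliberately simplified and the rule format is my own invention, not the tool’s actual implementation):

using System;
using System.Collections.Generic;
using System.Linq;

class CaptionComparer
{
    // Language routing rules, e.g. treat ENG captions as if they were ENU.
    private static readonly Dictionary<string, string> Routes =
        new Dictionary<string, string> { { "ENG", "ENU" } };

    // Parses "ENU=Item;DEU=Artikel" into a language -> caption map,
    // applying the routing rules along the way.
    public static Dictionary<string, string> ParseCaptionML(string captionML)
    {
        var result = new Dictionary<string, string>();
        foreach (string pair in captionML.Split(';'))
        {
            string[] parts = pair.Split(new[] { '=' }, 2);
            string language = parts[0].Trim();
            if (Routes.ContainsKey(language))
                language = Routes[language];
            result[language] = parts[1];
        }
        return result;
    }

    // Two CaptionML properties count as equal when every language present
    // on both sides carries the same caption (missing languages are ignored).
    public static bool CaptionsMatch(string left, string right)
    {
        var l = ParseCaptionML(left);
        var r = ParseCaptionML(right);
        return l.Keys.Intersect(r.Keys).All(lang => l[lang] == r[lang]);
    }
}

With a rule like ENG = ENU, the W1 and localized captions from the Item example above would no longer show up as a difference.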

There are also plans to include handy features like some sort of object analyzer (where used, etc.), easy object renumbering, batch code insertion, etc.

But the main advantage is that the different parts of the objects are treated and compared as such, and not just as a flat text file. You should be able to decrease your comparing effort a lot when dealing with structure changes, captions, and lots of other things.

A special, but maybe very handy, feature is the form designer, as you can see in picture 2. It will enable you to effectively merge forms with a direct preview of the result, so you are no longer forced to check your fields in the NAV designer to ensure they are shown correctly. The designer lets you drag and drop controls, resize them, etc.

But because I think a picture tells more than a thousand words, have a look at this example:

If you have any questions about the tool, or you have a nice feature in mind that should be implemented, please feel free to contact me or post a comment!

Standard Security Model & Custom Fields on the User Table

Another day, another problem 😉

Last Friday we had an issue when moving a SQL database from an old server to a newer one: suddenly, some people were not able to print certain reports.
Long story short:

The issue was related to custom fields on table 2000000002 User. These fields were updated when the print button was pressed, and that is very bad when using the standard security model. With the standard model, all permissions get synchronized whenever the User table is modified. Since we do this before printing, in the context of a normal user who doesn’t have the db_securityadmin right in SQL, the synchronization fails with this error message:

"The Microsoft Dynamics NAV Classic and SQL Server security systems have not been successfully synchronized.  The SQL Server login <User> does not exist on the <Server> server."

I don’t really know why it worked on the old server and, to be honest, I don’t want to know… because if every user is able to synchronize permissions, something is seriously wrong, imo (btw: these security settings weren’t done by us).
So to solve the issue, we moved the fields from the User table to the User Setup table, and everything is working fine.

So if you ever need custom fields on the User table, consider putting them in the User Setup table instead, because they belong there anyway.

Error when adding a SQL user that was deleted before

Recently we encountered a SQL problem: you get a SQL error message when adding a user in Navision that once existed in the database and was then deleted.

The error message says that the user already exists. But the error message is wrong 😉 because you removed the user, so it doesn’t exist, right?
Partly… the problem is that when you remove the user in Navision, it is removed from the NAV database, but the user also exists in the master database. That entry is created when you enter the user name in the User table in NAV. Since the entry is not removed when you delete the user in NAV, you have to manually delete the user from the master database in order to be able to add the user in NAV again!

So if you are not able to create a user in NAV because it supposedly already exists, go to the master database and search for an existing entry for that user.