While we wait for Microsoft-hosted agent support (build without a VM) to be released, we have to keep a dedicated build VM that mostly sits there doing nothing. To save costs we can set up a schedule to turn it on and off, but then the build schedule has to be aligned with it, and that does not make much sense if development is not active and you only make changes once in a while.
Here is a quick and dirty workaround. You can get the Azure Virtual Machine Manager extension from the Visual Studio Marketplace.
It adds a single task that can stop or start a VM.
Now we need to modify the standard build definition and add a new task to start the build VM.
You cannot add it to the existing agent job, because that job is self-hosted and requires the VM to already be up and running: catch-22! So, add another agent job:
Move the new job before the existing one and add the Azure VM task:
Now your pipeline should look like this:
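In plain text, the layout is roughly the following (the job and task names below are only placeholders, yours may differ):

Agent job: Start build VM (runs on a hosted agent)
    Task: Azure VM - start the build VM
Agent job: Build (runs on the self-hosted agent on the build VM)
    Tasks: the existing D365FO build steps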
And to make it work we need one more change. The build options demand the DynamicsSDK capability; obviously the new hosted agent won't have it and the build will fail, so I simply removed DynamicsSDK from the demands list. That's why I call this quick and dirty!
To stop the VM after the build, I put the Azure VM task at the very beginning of the release pipeline, which is triggered automatically after a successful build.
Using this neat extension, we can automatically start the VM before a build starts and turn it off right after the build. I deliberately put the stop task in the release pipeline, so the VM is not stopped if the build fails and I can investigate or quickly rerun the build without waiting for the VM to start up. Obviously, one day we will get the ability to use a Microsoft-hosted agent, but meanwhile this may help you save some $$.
In the previous blog post series we learned how to import a simple CSV file. However, CSV and other text files may contain records of different types that should be imported into different tables or processed differently. In this post we will enhance the format and mapping we created. Let's say that in our example the first column represents the record type (a, b or c):
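For illustration, such a file could look something like this (the values are made up; the type marker comes first, followed by the String, Real and Int values):

a,First record,1.25,10
b,Second record,2.50,20
c,Third record,3.75,30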
In this case we need to change the format and add a new “CASE” element:
And add 3 record sequences instead of one:
Note that each sequence has a predefined value for the String field; that's how we tell ER that if a record has “a” in the first column it should be parsed with the RecordA sequence. Also, we changed Multiplicity to “One many” for the sequences, to tell ER that there is at least 1 record of each type. It could be set to “Zero many” if a record type is optional, or “Exactly one” if we expect it only once in a file.
Now we need to change the mapping. Each sequence has a system field “isMatched” that is populated when a file record matches the sequence pattern. We will use it to bind all 3 record types to the same model field, but in real-life scenarios different record types may go to different tables, like header and lines.
The expression used is pretty simple: it takes the value from the first record type if it is matched; if not, it checks the second one, and if that is not matched either, it takes the value from the third.
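As a rough sketch only (the sequence and field names below are placeholders; in your mapping they follow whatever you named the elements), such an expression can be built with nested IF functions:

IF(RecordA.isMatched, RecordA.String, IF(RecordB.isMatched, RecordB.String, RecordC.String))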
In this blog post we will create a new mapping to map the format to the model. On the format designer screen, click the “Map format to model” button. Create a new record, select the model definition, and specify a name and description:
Open the designer. In the designer, bind the Record List from the format to the Record List in the model and then bind the fields accordingly.
Finally, we can test our format. For the test I'm going to use a simple CSV file:
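For example, a couple of rows matching the String, Real and Int columns (the values here are made up) would do:

First line,1.50,3
Second line,2.75,7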
Go back to the Model to Datasource mapping form, click Run in the action pane and upload the file.
If we've done everything right, we will get an XML file that contains the data mapped to the model, or errors, if any:
At this stage we have a Data Model, a Format and a Mapping that we've tested. In the next blog post we will do the last piece of the setup: map the Model to the Destination and test the whole import.
In this blog post we will create a new Format. It represents the document schema and is used to parse the document. Go to Organization administration > Workspaces > Electronic reporting, select the Data Model created in the previous post and create a new configuration:
In the format designer, Add root -> File:
Add a sequence and set the delimiter to “New line - Windows (CR LF)”. This tells ER that the file's lines are separated by CR LF. It is possible to select CR for Mac or LF for Linux, or to specify a custom delimiter.
Add a new sequence. This sequence will represent the lines. Set Multiplicity to “One many” to say that at least one line is required.
Add another sequence. It will represent an individual line. Set the delimiter to ',' to split fields by comma. Use another delimiter if required.
Add 3 fields:
In the end you should have a format like this:
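In text form, the resulting hierarchy is roughly as follows (the element names are whatever you entered when adding them):

File
    Sequence (delimiter: New line - Windows (CR LF))
        Sequence (Multiplicity: One many)
            Sequence (delimiter: ',')
                String field
                Real field
                Int field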
In the next post we will map the format to the model and test it!
In this post series I will show how to use Electronic Reporting (ER) to import a CSV file. This tool allows us to create a new import process without a single line of X++ code. It can be maintained by end users without a developer's help and does not require deployments, because configurations can easily be transferred between environments via XML export and import.
I used the available documentation, but it does not have enough detail, so here I will try to explain the process step by step. We will start with the Data Model creation and go through all the stages below:
For simplicity, I’m going to use a custom table that has 3 fields: String, Real and Int.
A Data Model is an abstraction over the destination/source tables and can be used by multiple different formats. In our case it will be similar to the destination table because the table is quite simple. To create it, go to Organization administration > Workspaces > Electronic reporting.
In the designer, create the model root node:
Add a Records List:
Now add 3 fields, one for each column in the source file:
In the end you should get this structure:
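In text form, the model tree is simply this (the field names here just mirror the table fields):

Model root
    Records (Record list)
        String
        Real
        Int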
Change the model status from Draft to Completed; this is required for the next step.
I'm quite happy with how Visual Studio compiles code and I always use a build server to build packages, so there is no real reason behind this blog post, except a question from the d365fo.tools team. The guys are doing a great job, and if you have not used their tools yet, give them a spin!
The code below finds all referenced modules and compiles them before compiling the selected module. Usually, however, the referenced modules are standard ones that do not need to be built, so exclude everything that is not required.
private const string PackagesLocalDirectory = @"k:\AosService\PackagesLocalDirectory";

static void Main(string[] args)
{
    string moduleToCompile = "Retail";
    ServiceMetadataProvider provider = new ServiceMetadataProvider(PackagesLocalDirectory, true, true, true, moduleToCompile);
    var modules = provider.GetReferencedModules();

    // Generally we don't need to build standard modules, here is just an example:
    // filter this list down to your own packages before compiling.
    foreach (var module in modules)
    {
        CompileModule(module);
    }
    CompileModule(moduleToCompile);
}

private static void CompileModule(string module)
{
    Parameters parameters = new Parameters();
    parameters.XppMetadataPath = PackagesLocalDirectory;
    parameters.ModelModuleName = module;
    parameters.ReferencedAssembliesFolder = PackagesLocalDirectory;
    parameters.AssemblyOutputPath = System.IO.Path.Combine(PackagesLocalDirectory, module, "bin");
    // ... the rest of the method, which invokes the X++ compiler with these parameters, is omitted here
}