Since this one caught me out, and a few other people, it seemed like a good idea to blog about it. Some time ago the Azure portal got a makeover, which I have to say makes the portal look very pretty. Whilst setting up a proof of concept for PowerBI I thought it was also a good time to look at setting up an Azure SQL database. The last time I did this was in the previous portal. So I duly set up the database, and one of the last steps was to allow the IP address from my work organisation access to the database. So I googled and found this page - https://azure.microsoft.com/en-gb/documentation/articles/sql-database-configure-firewall-settings/

The Wrong Way

Open the new style Azure Portal. Click on browse (1). From the list of blades click once on SQL databases (Figure 1).

Figure 1 - Opening the SQL databases blade

Click on the 'SQL databases' blade and this will open a list of all the SQL databases that are in your account (Figure 2).

Figure 2 - List of SQL databases showing area where to click to show settings

Clicking on the ellipsis will cause a new menu window to appear (Figure 3).

Figure 3 - Clicking on the ellipsis to get the settings link

Click once on the settings link and this will open the settings window (Figure 4).
The Right Way

Open the new style Azure Portal. Click on browse (1). From the list of blades click once on SQL servers (Figure 5).

Figure 5 - Opening the SQL servers blade

Click on the SQL servers blade and this will open a list of all the SQL servers in your account (Figure 6).

Figure 6 - List of SQL servers

Move the mouse over the row for the SQL server that you require to see the settings for. Click once with the left mouse button on the selected SQL server; in Figure 6 this is SQL server 'kjsgbci03i'. This will then open the Settings window on the far right hand side (see Figure 7).

Figure 7 - Settings page for SQL server

Click once on the firewall row and this will open the 'Firewall settings' window (see Figure 8).

Figure 8 - Firewall settings window
Final Note

Splitting the settings for SQL servers and SQL databases does make sense; the same logic is used within SSMS. That said, in the old style portal the link to the page for setting allowed IP addresses is under 'SQL databases' (see Figure 9). Hence why I must have looked at the SQL databases blade first. At least that's my excuse and I am sticking to it ;->

Figure 9 - SQL server page with link to Manage allowed IP addresses (Firewall rules) in the old style Azure portal.
The core application my current company has built allows people to apply for grants. We track the application through various stages; much like a loan application, there are many stages that the application might go through. The status of the application is not linear, so applications can go back and forth between the same status more than once. In Figure 1 below an application can be given a status of 'Under Review', then get a status of 'Submitted' or 'Pre-Submission'. So, an application might have the same status more than once with different dates.
Figure 1 – showing the workflow an application might take.

One question our customers want answered is how long an application takes to move from one status to another. This question could be answered using the existing application database, but that method is not ideal as the queries are quite complex and slow. This is one of the reasons I decided to build a data warehouse as a proof of concept: to see if there was a quicker way to answer that question.
Calculating time difference

An application can be allocated a status more than once during its lifecycle, and this has an impact on the method we used for calculating the time difference. The methodology used to get the time difference is described below.
The fact table in the POC data warehouse stores the ApplicationID, Status, and the datetime the application was allocated that status. If an application has been assigned the same status more than once, there is a record for each time the application received that status. This has some advantages: the end user can easily understand the data as it is shown in the table. On the other hand, it presented some difficulties when calculating the time period between each status. After various attempts the following calculated column was created, in this case calculating the days between a grant receiving Submitted status and Rejected status.
Figure 2 – First version of the calculated column DaysFromSubmittedToRejected
For both values the logic used is very similar. For the variable FirstSubmittedDate the MIN() function is used to get the first date, and the FILTER statement only returns records with the same ApplicationID and Status = 'Submitted'. For the variable LastRejectedDate the MAX() function is used to return the last date. Using variables means the logic can be re-used for other statuses, and the DATEDIFF() function is easier to read. When the results were displayed they were not as expected.
Figure 3 – results from the first version of the DaysFromSubmittedToRejected calculated column (see Figure 2)
The requirement was to return the number of days elapsed from Submitted to Rejected, with the value returned against the record for the last time the application received that status. So another calculated column was created to flag whether the current record is the LastRejectedDate, which is shown in Figure 4.
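Figure 4 is only shown as an image, so below is a minimal sketch of what that LastRejectedDate calculated column could look like. The table and column names follow the ones used in this post; the exact expression in Figure 4 may differ.

LastRejectedDate =
VAR LastRejected =
    CALCULATE (
        MAX ( 'FactAppStatuses'[DateAddedAt] ),
        FILTER (
            ALL ( 'FactAppStatuses' ),
            'FactAppStatuses'[ApplicationID] = EARLIER ( 'FactAppStatuses'[ApplicationID] )
                && 'FactAppStatuses'[Status] = "Rejected"
        )
    )
RETURN
    -- flag only the row that holds the last Rejected date for this application
    IF (
        'FactAppStatuses'[Status] = "Rejected"
            && 'FactAppStatuses'[DateAddedAt] = LastRejected,
        "Y",
        "N"
    )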
With this calculated column (LastRejectedDate) in place, the return statement of 'DaysFromSubmittedToRejected' just requires a small change, which is as follows:
DaysFromSubmittedToRejected =
VAR FirstSubmittedDate =
    CALCULATE (
        MIN ( 'FactAppStatuses'[DateAddedAt] ),
        FILTER (
            ALL ( 'FactAppStatuses' ),
            'FactAppStatuses'[ApplicationID] = EARLIER ( 'FactAppStatuses'[ApplicationID] )
                && 'FactAppStatuses'[Status] = "Submitted"
        )
    )
VAR LastRejectedDate =
    CALCULATE (
        MAX ( 'FactAppStatuses'[DateAddedAt] ),
        FILTER (
            ALL ( 'FactAppStatuses' ),
            'FactAppStatuses'[ApplicationID] = EARLIER ( 'FactAppStatuses'[ApplicationID] )
                && 'FactAppStatuses'[Status] = "Rejected"
        )
    )
RETURN
    IF (
        'FactAppStatuses'[LastRejectedDate] = "Y",
        DATEDIFF ( FirstSubmittedDate, LastRejectedDate, DAY ),
        0
    )
Figure 5 – Second version of the calculated column DaysFromSubmittedToRejected
When the calculated column shown in Figure 5 is added to the PowerBI report and run, the results can be seen in Figure 6 below.
Figure 6 – results from the second version of the DaysFromSubmittedToRejected calculated column (see Figure 5)

The added IF() statement checks whether the current row contains the LastRejectedDate; otherwise it returns zero. This returns the record set which was expected. The positive side of this method is that the same calculation can be re-used for a variety of statuses, only requiring a minor change to the formula. On the negative side, the more calculated columns there are, the longer it will take to generate a report, and the time taken to evaluate the calculated column will grow as the number of rows increases. No doubt there are other approaches that could be taken to improve this. On the other hand, the current method is significantly faster than doing the same calculations on the application database. Even better, with the use of a PowerBI report or a tabular data model, these figures could easily be used by an end user to build reports which provide insight into their data. All the better that our clients could do this for themselves, with relative ease.
So what am I going to show in the following blog posts?
So that's what I am going to show; next let's look at what is required to do it.
Ingredients
First, download and install Azure Data Studio; you can download the program from here.
Once you have installed Azure Data Studio, open the application. In Azure Data Studio find 'File' in the menu and click it, then from the menu select 'New Notebook' (see Figure 1 below).
This will open a new notebook (yippee!!). This might not sound very exciting yet, however it is! When a new notebook opens the kernel must be set. The way that I think about this is that it sets the language which will be run in the notebook, and it will default to SQL. What we want to run is Python 3. From the list of kernels available select 'Python 3'; this will set the language that will be run in the notebook.
Figure 2 – selecting the Kernel (programming language) that will be run in the notebook.
Once 'Python 3' has been selected, if Python is not already set up and installed then Azure Data Studio will prompt you to set up and configure Python for Notebooks. A screen will open as we can see in Figure 3. For this blog post I accepted the default location and clicked on the 'Install' button.
Figure 3 – Install and configure python for use in Azure Data Studio
If everything has gone to plan, then you should see something that looks like Figure 4.
Figure 4 – installation of Python going as planned
Installing Python can take some time, so it might be a good idea to get a hot beverage or do something else until it has finished installing.
Figure 5 – Installation of python is now completed successfully
In sessionize.com it is possible to create different APIs to output data; in this example the data is output as JSON. It is also possible to select different parts of the data to be output; in this example 'All Data' is selected. Selecting the data from sessionize.com is beyond the scope of this blog post, though it is very easy to do.
The last step is to get the URL to be called in the code; this can be seen in Figure 6 below.
Figure 6 - API/Embed screen in sessionize.com for Data Scotland 2019.
Figure 6a (yes, I forgot to include this until a later edit) shows the columns that are output from sessionize.com for the API endpoint used.
Figure 6a - Settings for Available API endpoint used in this blog post.
OK, enough setting up, let's write some code. To get access to other libraries in Python, the command that is used is import <library name>. In this example there are four libraries which are imported to be used. If you run the code shown in Figure 7 you might get the error message shown.
Figure 7 – Error message if the package for the library being imported is not installed.
If you do see this error message then all you need to do is install the required package. In Figure 7, at the top left hand side, there is a button titled 'Install Packages'. Click on that button and the terminal window will open (see Figure 8). The command that installs the 'pyodbc' library is '.\python.exe -m pip install pyodbc'; type the command into the terminal window and press enter.
Figure 8 – Entering the command to install the ‘pyodbc’ package in the terminal window.
Hopefully the 'pyodbc' package will install without any challenges. If, like me, you are not so lucky and you get the error message shown in Figure 9, then this is quite easy to fix.
Figure 9 – Error message stating PIP (Pip Installs Packages) requires to be upgraded to install the 'pyodbc' package

If you get the error message shown in Figure 9 then enter the following command at the prompt: '.\python.exe -m pip install --upgrade pip'. If everything goes well you will see a message like the one shown in Figure 10.
Figure 10 – Successfully upgraded PIP to v 18.
Once the new version of PIP has been installed, restart Azure Data Studio. Then open a notebook, select Python 3 as the kernel language, click on 'Install Packages' and install the 'pyodbc' library (see Figure 8). Once 'pyodbc' has been installed, it is now time to run the Python script.
The Python script will do the following:

1 - call the API and load the JSON string returned into a dict object, which is then cast to a string object

2 - open a connection to a SQL database and run a SQL script to create the table if it does not exist

3 - insert the JSON string into a field in the table
Below is the Python script that is used. Much of the credit must go to the various websites which I have added references to in the script. In Figure 11 we can see the script that is used. All that you need to change is the URL for the sessionize.com API and the user credentials in the connection string; otherwise this is the script I used.
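Since the full script only appears as a screenshot in Figure 11, here is a minimal sketch of the same flow. The API URL, connection string, table name and column name below are placeholders, not the actual values used in the notebook.

import json
import urllib.request

import pyodbc

# placeholder Sessionize 'All Data' endpoint - replace with your own API URL
API_URL = "https://sessionize.com/api/v2/your-endpoint-id/view/All"

# 1 - call the API and load the JSON returned into a dict, then cast it back to a string
with urllib.request.urlopen(API_URL) as response:
    all_data = json.loads(response.read().decode("utf-8"))
json_string = json.dumps(all_data)

# 2 - open a connection to the SQL database and create the table if it does not exist
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=DataScotland;UID=youruser;PWD=yourpassword"
)
cursor = conn.cursor()
cursor.execute("""
IF NOT EXISTS (SELECT * FROM sys.tables WHERE name = 'SessionizeData')
    CREATE TABLE dbo.SessionizeData ([Data] NVARCHAR(MAX));
""")

# 3 - insert the JSON string into the field in the table
cursor.execute("INSERT INTO dbo.SessionizeData ([Data]) VALUES (?)", json_string)
conn.commit()
conn.close()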
Figure 11 - Python script in Azure Data Studio Notebook to import Json in SQL server 2016
The Azure Data Studio Notebook that is shown in Figure 11 can be downloaded from here.
In the next blog post we will look at how to work with the JSON data in SQL Server.
Figure 1 – The function ISJSON() returns 1 showing the data in the field [Data] is valid JSON
Now the JSON data is in the SQL database, let's see about doing something useful with it. In SQL Server 2016 a number of new functions were added that allow the querying and manipulation of JSON data. Having done some research, I found this blog post - https://visakhm.blogspot.com/2016/07/whats-new-in-sql-2016-native-json_13.html. Using code from this blog post I was able to extract the data from the JSON string supplied by the API from the sessionize.com website.
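Figure 1 above is a screenshot; the validity check it shows is roughly the query below. The table name is a placeholder for whatever table the Python script loaded the JSON into.

SELECT ISJSON([Data]) AS IsValidJson  -- returns 1 when [Data] holds valid JSON
FROM   dbo.SessionizeData;            -- placeholder table name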
Before querying the data I need to explain one concept which is crucial for extracting data from structured JSON. In the example in Figure 2 below the path of the 'title' key:value pair is as follows:
sessions.0.title - this would have the key:value pair 'title: "DAX Gotchas"' (see Figure 2).
Figure 2 – JSON data showing the sessions node and the first speaker node.
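Since Figure 2 is a screenshot, the shape of the data is roughly the trimmed sketch below. The values and most field names are placeholders; only the nodes discussed in this post (sessions, speakers, links) are assumed to exist.

{
  "sessions": [
    { "id": "117469", "title": "DAX Gotchas", "speakers": ["<speakerID>"] }
  ],
  "speakers": [
    { "id": "<speakerID>", "sessions": ["117469"], "links": [] }
  ]
}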
In the JSON object that was returned from the sessionize.com API there are a number of nodes for each session, starting with the number 0 through to 29. Within each node there are a number of Key:Value pairs, e.g. 'id: "117469"'. The paths, nodes and arrays, e.g. []speakers and []categoryItems, are what the TSQL is going to extract values from. Enough with all that waffling about JSON objects, let's write some proper TSQL. In the next example we are going to use a function called OPENJSON(), which is only available in SQL Server 2016 or later. Using OPENJSON() in this example we are going to provide two arguments: @AllJson, which contains the JSON object and must be of datatype NVARCHAR(), and the path. The way I think about the path is that it specifies the node or array that I want to return from @AllJson. The other function that we will use is JSON_VALUE(). This function also accepts two parameters: an expression, which is a variable or field name containing JSON data, and the path. The way I think about the path is that it specifies the node or array that I want to return from the JSON data (yes, I said that already, just wanted to see if you are paying attention ;->). That's a lot of words, so let's look at some TSQL in Figure 3 below.
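As Figure 3 is an image, here is a sketch of the kind of query it shows. Loading @AllJson from the table is an assumption, and the table name is a placeholder.

DECLARE @AllJson NVARCHAR(MAX);
SELECT @AllJson = [Data] FROM dbo.SessionizeData;  -- placeholder table name

SELECT  [Key],
        JSON_VALUE([Value], '$.id')    AS SessionID,
        JSON_VALUE([Value], '$.title') AS SessionTitle
FROM    OPENJSON(@AllJson, '$.sessions');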
Figure 3 – The JSON data from the sessions node returned as a result set in SSMS
When we look at Figure 3 we will notice that the first row of the data is the same as the data shown in Figure 2. In essence, FROM OPENJSON(@AllJson, '$.sessions') returns a dataset which consists of three fields, namely Key, Value, and Type. The field Value contains the JSON object for each of the 30 session nodes. Next the JSON_VALUE() function takes the JSON and extracts the value for one key pair. This is done by specifying the Key of the Key:Value pair; so in the case of title, the path '$.title' is supplied for the path parameter. Since there is only one Key:Value pair where the Key = title, the value is returned from the JSON_VALUE() function, in the field 'SessionTitle'. Looking at Figure 2, there is also a Key:Value pair in the []speakers array. So sessions[0].id.value is "117469", and the corresponding lookup value, speakers[7].sessions.value, is "117469". The two values and their locations in the JSON object are shown in Figure 4 below.
Figure 4 – Showing the lookup values for both sessions to speakers and vice versa.
So we know that we want to get access to the data in the []speakers array, as this contains the list of speakerIDs for each session. How is this done? Well, I found an answer in this blog post - https://visakhm.blogspot.com/2016/07/whats-new-in-sql-2016-native-json_13.html. Below in Figure 5 is the TSQL and result set.
Figure 5 – Updated query to return the speakerID from the []speakers array.
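As Figure 5 is a screenshot, a sketch of that updated query might look like the one below (the loading of @AllJson is as in the earlier sketch, and the column aliases are assumptions).

SELECT  JSON_VALUE(s.[Value], '$.id')    AS SessionID,
        JSON_VALUE(s.[Value], '$.title') AS SessionTitle,
        sp.[Value]                       AS SpeakerID
FROM    OPENJSON(@AllJson, '$.sessions') AS s
CROSS APPLY (
            SELECT [Value]
            FROM   OPENJSON(s.[Value], '$.speakers')
            ) AS sp;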
All we have done in the query shown in Figure 5 is to add a CROSS APPLY with a simple select statement. Now the speaker ID is returned. Note that if there is more than one speakerID, such as in the case of sessionID 117615 (which has two awesome speakers), the query returns two rows, with a different speakerID for each, which is just what we wanted. Next let's have a look at returning data from the speakers node. Below in Figure 6 is the TSQL to return some data from the []speakers array.
Figure 6 – TSQL query to return data from the []speakers array
Looking at the query inside the CROSS APPLY
SELECT Value
FROM OPENJSON(s.Value, '$.links')
WHERE Value LIKE '%Twitter%'

There are a couple of things that are worth looking at. First, it is possible to use a WHERE clause on the columns returned by the OPENJSON() function. The reason for using the WHERE clause is that the links node can contain more than one type of link; during development some of the speakers had a LinkedIn profile, which they then removed 🙁. So by now I am sure you are saying "show me the money". After some work I created a query which extracts the session, speaker and room information and returns it as a single result set, as shown in Figure 7 below.
Figure 7 – Result set with Session, Speaker and room details
If you want to have a try yourself and play with the code, then you will find:

The TSQL source code in this Azure Data Studio Notebook here

The Python code in this Azure Data Studio Notebook here
If you have not run the Python code to import the data, then I have created an Azure Data Studio notebook containing the code to create the database and other tasks. The notebook can be found here.
Last, but very much not least, why did I spend so much effort to get all the data out of the sessionize API? The end goal was to supply the data to SQL Server Report Builder (download from here https://www.microsoft.com/en-us/download/details.aspx?id=53613). This standalone tool will allow you to build an SSRS report. Using this tool I created a report which, when run, outputs pages that look like the one shown in Figure 8 below.
Figure 8 - Data finally published in the SSRS report
Forgive me whilst I indulge in some nostalgia first. Whilst at SQL Bits 2016, I finally made the decision to start a SQL user group in Glasgow. If I remember correctly, Craig Porteous attended the infamous SQL Bits party and met a gentleman dressed as a Wookiee (William Durkin). Now Glasgow has a growing SQL Server user group (thanks to a dedicated, hard working team). Not only that, we just had our second SQL conference! (Yes, SQL Bits has a lot to answer for ;->)
SQLGLA was a fantastic event and it has left me with some wonderful feelings. This blog post is partly a thank you to those involved, and partly a way to express some of my feelings and experiences of the event.
Organizing Team
Craig Porteous, Louise Paterson and I are the people who organised this year's event. I felt we were a team (this is a theme, by the way). Each of us contributed to the success of the event in a different way. The success of the event was not down to a single person, in my opinion; it was about us all coming together to make it a success. Craig did put a lot of work into making the event what it was.
Volunteers
There are few words to express my gratitude to our team of volunteers. As an organising team we have all volunteered before, so we knew what is expected of volunteers. There was a lovely mix of those who had done it before and a few who had never done any volunteering. What a team though; they all worked really hard. Speaking personally, I am so proud of them, they made this event for me. Not once did I have to worry about whether something was or was not being done. They just made it happen. Whilst I have said this before, I cannot thank them enough for being such an amazing team; they exceeded all my expectations.
Speakers
Last year, thanks to William Durkin, we had such an amazing lineup, and yet this year's lineup was even better! A few days before the event I was looking through the list. It would be a fair comment to say I was blown away by the quality of those who chose to speak at our event. From the feedback that I have heard so far, the attendees also agree 🙂 Looking at the depth of experience and knowledge that was shared was, to me, awe inspiring. What's even better is they all came to Scotland to share their knowledge and expertise with us.
SQL Family
It's a phrase that I've seen often on social media, but these are not just words; it's something that can be seen and experienced in real life. To experience it is very humbling and gratifying. A few weeks before the event I was contacted by Chris Taylor (@SQLGeordie), one of the speakers. The speakers had decided they wanted to get something for Craig & Karen Porteous, who are expecting their second child. It was my privilege and honour to present the gift on their behalf. Hopefully Craig will forgive me one day for gate-crashing the closing presentation to present the gift, as he knew nothing about it.
T-shirts
To make the event something a little different we wanted to find a design for the t-shirts that was rather special. Having worked with Emily Chappell over the years on various projects, her quirky designs and sense of style really work for me, so I suggested we should ask Emily for a design for the t-shirts. The first design was not what we wanted, so next we all got together in a coffee shop at lunchtime. We had a chat and did some brainstorming, and Emily then sent back a design which we all loved (see picture below). Personally, I've never been to a technical conference where they sold the t-shirts :-)
Next?
First there are a few housekeeping duties to take care of, which we are just finishing off now. That will be followed by some discussions to see what the next step will be 🙂 All I can safely say is keep looking, there will be some exciting news soon....
In the previous post (see here) I set up the blank datasets as parameters, storing the values of the database server and database name. Whilst attempting to create a suitable dataset for a report for an internal customer, I was frustrated that I could not do it using Power Query. Yet, on the other hand, I knew that I could write some TSQL that would give me the dataset I wanted very easily. So I wondered if it was possible to combine the two. Let's return to the Sql.Database function, which has this very nice example:

Sql.Database("localhost", "Northwind", [Query="select * from Customers"])

The last setting is "Query as text - Provide a query text that is run on the server to return values". It's almost like someone thought it would be a good idea to allow a user to run a SQL query. A few minutes later I had the query I wanted to run, and it looked something like this:
DECLARE @IssueType AS VARCHAR(10) = 'Problem';
DECLARE @ClientName AS NVARCHAR(100) = 'RainbowPainters';

SELECT DATEPART(MONTH, [Issue_Created]) AS Month_Number
      ,DATENAME(MONTH, [Issue_Created]) AS Month_Name
      ,DATENAME(YEAR, [Issue_Created]) AS Year_Name
      ,COUNT(DATEPART(MONTH, [Issue_Created])) AS Total
FROM [dbo].[IssueData] AS ID
WHERE [Issue_type_name] = @IssueType
  AND [ClientName] = @ClientName
GROUP BY DATEPART(MONTH, [Issue_Created])
        ,DATENAME(MONTH, [Issue_Created])
        ,DATENAME(YEAR, [Issue_Created]);

Next I went back to the dataset, opened the 'Advanced Editor' and added the query; you can see it in Figure 1 below.

Figure 1 - Advanced Editor window with the new query in place
To make it a bit easier to read, below is the actual text from the 'Advanced Editor' window above. All that has been added is the text in red, and the query text, which is in italics.
let
    Source = Sql.Database(DatabaseServer, DatabaseName,
        [Query = "DECLARE @IssueType AS VARCHAR(10) = 'Problem';
                  DECLARE @ClientName AS NVARCHAR(100) = 'RainbowPainters';
                  SELECT DATEPART(MONTH, [Issue_Created]) AS Month_Number
                        ,DATENAME(MONTH, [Issue_Created]) AS Month_Name
                        ,DATENAME(YEAR, [Issue_Created]) AS Year_Name
                        ,COUNT(DATEPART(MONTH, [Issue_Created])) AS Total
                  FROM [dbo].[IssueData] AS ID
                  WHERE [Issue_type_name] = @IssueType
                    AND [ClientName] = @ClientName
                  GROUP BY DATEPART(MONTH, [Issue_Created])
                          ,DATENAME(MONTH, [Issue_Created])
                          ,DATENAME(YEAR, [Issue_Created]);"
        ])
in
    Source
When I refreshed the dataset, the data appeared :-) Job done? No; as often happens, one client gets something and then everybody wants what they have. So I was asked if we could create the same set of reports for another customer. There were two choices at this point: I could just change the hard-coded client name, or I could parameterise the client name. Having heard about passing parameters from Excel to filter datasets before, I looked at the following posts. https://blog.oraylis.de/2013/05/using-dynamic-parameter-values-in-power-query-queries/ http://blog.crossjoin.co.uk/2015/11/24/using-parameter-tables-to-control-data-refresh-in-power-bi/ Having looked at the blog posts, which declare and then use parameters, this looked like a simple concept. Kudos to the Power BI team for that. The first step was to add an additional dataset as a parameter. So I now had three datasets as parameters, the latest one being 'ClientName', which stores the client name.
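As a sketch, the 'ClientName' blank query used as a parameter can be as simple as the following; the value shown here is just the example client name used in this post.

let
    ClientName = "RainbowPainters"
in
    ClientName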
Figure 2 - List of datasets containing parameters

So, having looked at the blog posts above, I realised that I just needed to declare a query which would hold the query text and pass the ClientName parameter into that query text. Following some trial and error, I ended up with the query text below.

let
    pQuery = "DECLARE @IssueType AS VARCHAR(10) = 'Problem';
              DECLARE @ClientName AS NVARCHAR(100) = '" & ClientName & "';
              SELECT DATEPART(MONTH, [Issue_Created]) AS Month_Number
                    ,DATENAME(MONTH, [Issue_Created]) AS Month_Name
                    ,DATENAME(YEAR, [Issue_Created]) AS Year_Name
                    ,COUNT(DATEPART(MONTH, [Issue_Created])) AS Total
              FROM [dbo].[IssueData] AS ID
              WHERE [Issue_type_name] = @IssueType
                AND [ClientName] = @ClientName
              GROUP BY DATEPART(MONTH, [Issue_Created])
                      ,DATENAME(MONTH, [Issue_Created])
                      ,DATENAME(YEAR, [Issue_Created]);",
    Source = Sql.Database(DatabaseServer, DatabaseName, [Query=pQuery])
in
    Source
Figure 3 - showing the query text in the Advanced Editor window

Much to my delight this syntax works as expected. So I now have a way to specify parameters which can be passed to SQL Server. From reading about M (Power Query Formula Language) there's more to learn. For the moment I am happy with the new found knowledge, whilst I play with some of the other technologies offered by PowerBI. The only downside is this only works in PowerBI Desktop :-(
One of the attractions of being a programmer was that I could find ways to make routine tasks easier and quicker. As our company and clients have discovered the joy of Power BI, and the reports I have created, more clients are asking for reports. Some of these are the same reports, to ensure everyone has the same view of their data.

Scenario

There are two databases I populate with data; one of these is on my machine, the other is an Azure database. As you would expect, I use the local copy to test out various changes to the database. Then, using the Redgate tool SQL Compare, I can script any changes and apply them to the Azure database. It is possible to change the data source for each dataset easily using the following method. From the main window, on the 'Home' tab, click once on the 'Edit Queries' button (see Figure 1).

Figure 1 - showing the location of the 'Edit Queries' button
This will open the 'Edit Queries' window. Now to see the data source for a selected dataset: first select a dataset; in the example shown in Figure 2 it is 'DimDate'. Next click on the 'Advanced Editor' button (see Figure 2).
Figure 2 - Showing location of 'Advanced Editor' button

When the Advanced Editor window opens it will look something like the one shown in Figure 3 below.

Figure 3 - Advanced Editor window, with M (Power Query Formula Language) query showing the Database Server Name and Database Name

The syntax of the statement is relatively simple. In this case we are looking at the Sql.Database data function, which looks like this:

Source = Sql.Database("Name of Database Server", "Name Of Database")

This is the text shown in Figure 3:

let
    Source = Sql.Database("localhost", "DBdata"),
    dbo_DimDate = Source{[Schema="dbo",Item="DimDate"]}[Data]
in
    dbo_DimDate
To change the data source to point at the Azure database, the only changes that have to be made are as follows:
let Source = Sql.Database("AzureDatabase", "DBdata"), dbo_DimDate = Source{[Schema="dbo",Item="DimDate"]}[Data] in dbo_DimDate Job done. This was nice until you have to do it several times, and personally I got bored then wondered if there was a better way? I am glad you asked if there is a better way let me show you. Having seen some examples where the data source was an excel spreadsheet l was aware that it was possible to supply parameters to change the data source. From one spreadsheet to another one. The first step is to set up the parameters. Open the Query Editor window, click on the 'New Sources' button from the list click on 'Blank Query' (see Figure 4)
Having set up the DatabaseServer dataset (parameter value), repeat the process and create a DatabaseName dataset (parameter value). Figure 6 shows what your query editor window would look like if you entered the same details shown in this example.
* DatabaseServer - containing the value 'AzureDatabase' or the name of the database server you are going to connect to. * DatabaseName - containing the value 'DBData' or the name of the database you are going to connect to.
If we look at the advanced properties of the dataset 'DimDate' the data source would look like the one shown below.

let
    Source = Sql.Database("AzureDatabase", "DBdata"),
    dbo_DimDate = Source{[Schema="dbo",Item="DimDate"]}[Data]
in
    dbo_DimDate

Next step is to replace the database server name and database name with the names of the datasets that have just been set up. So the data source will look like the one below.

let
    Source = Sql.Database(DatabaseServer, DatabaseName),
    dbo_DimDate = Source{[Schema="dbo",Item="DimDate"]}[Data]
in
    dbo_DimDate
The advanced query window should look like the one shown in Figure 7 below. When the data source is queried, the values of 'DatabaseServer' and 'DatabaseName' are replaced by the values from the relevant dataset. In this PowerBI desktop report I just replaced all the hard-coded instances of the database server name and database name with the relevant parameter, which allows me to switch between two databases with the ease I wanted, only with PowerBI desktop :-(

Figure 7 - Showing the final M (Power Query Formula Language) text (data source)

If you click on this link you can download a copy of a .PBIX with the two datasets shown above. There is more you can do with this; you will need to wait till part two of this blog post for more details.