Power BI Monthly Digest–August 2018


If you’ve been following this series, you know that each month we dive into the new releases from Power BI and we like to focus on the things we think will make the most impact. This month we’ll include the release of the #1 requested feature! Be sure to watch the video for a quick demo of each new feature we discuss here.

OK, here’s what’s new and exciting:

1. Bookmarking Groups – When creating bookmarks, there are basically two approaches – navigational bookmarks where we have an image, for example, that links you to something else – it navigates you within the report. Then we have other bookmarks that are meant for more of a storytelling perspective.

Up to now, although we can divide bookmarks into these 2 categories, when we look at the bookmark pane, we’ve had to put all the bookmarks into one bin. Wouldn’t it be good to store these different approaches in different places? Now we can do just that, giving you a better way to organize your bookmarks by navigational and storytelling.

2. Next is the #1 requested feature (drumroll please) – exporting reports to a PDF. Many of you may work with or for people who love to have reports printed and put in their hands, but there has not been a great way to do this. Now, from Power BI Desktop, we can export reports to a PDF and print them quickly and easily. When doing this, it will not print your wallpaper, only the report pages, and any report pages marked as hidden will be excluded.

3. Themes are now generally available – The purpose of themes is to standardize your design process. These have been in Preview for quite a while but are now generally available. Again, check out our video for a quick look at themes. A cool thing to point out here is they’ve added an extended theme file (an advanced file, I’d call it) where you can literally define every aspect of every single visual type, so you can specify colors or fonts for maps, bar charts, or whatever. So, you can standardize your report process across the entire enterprise.
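To give a feel for the format, here is a minimal sketch of what a theme JSON file might look like. The top-level properties (name, dataColors, visualStyles and so on) follow the documented theme file structure, but the specific colors, fonts and the wildcard visualStyles entry are just placeholder values you would replace with your own standards:

{
  "name": "Enterprise Standard",
  "dataColors": [ "#01B8AA", "#374649", "#FD625E", "#F2C80F" ],
  "background": "#FFFFFF",
  "foreground": "#252423",
  "tableAccent": "#01B8AA",
  "visualStyles": {
    "*": {
      "*": {
        "labels": [ { "fontSize": 10, "fontFamily": "Segoe UI" } ]
      }
    }
  }
}

Save something like this with a .json extension and import it through the theme menu in Power BI Desktop, and every report that uses it picks up the same look.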

4. Slicer restatement – A simple addition to an ‘oldie but goodie’ here. If you have a slicer with the drop-down slicer style, when you select something from the drop-down you can readily see it, so it’s easy to understand what filter has been applied. But if you use the List style, you just get a list. What has been added is the ability to visibly list what has been applied in this filter.

In the past we’d have to scroll through the whole list to see what has been selected. Now we can go into the properties of the slicer and under Format/Slicer Header, we can go to Filter Restatement. When this is turned on, we can see above our list which ones have been selected instead of having to scroll through the list.

5. Conditional Formatting by Field Value – More enhancements have been added since we talked about conditional formatting a few months ago. With conditional formatting, we use color tied to a metric or some form of evaluation to make performance or a KPI easy to read at a glance. So, if we’re evaluating based on profits, for instance, we can use color to easily see whether performance is good or bad.

What’s been added is the ability to format a column based on the value of a field – basically, a color from a field. The field value returns a color, either a literal color name such as “blue” or the hex representation of a color, and that dictates how the column should be colored. This also updates as data gets refreshed and values surpass or drop below the thresholds you’ve set. Check out the video to see how to use this new feature.
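As a rough illustration (not from the original demo), a DAX measure like the one below could feed the Field value option. The measure name, the [Total Profit] measure and the thresholds are assumptions you would swap for your own; what matters is that the measure returns a color string:

Profit Color =
SWITCH (
    TRUE (),
    [Total Profit] >= 100000, "#107C10",   // healthy profit: green
    [Total Profit] >= 0, "#F2C80F",        // break-even zone: amber
    "#A80000"                              // loss: red
)

You would then point the Background color or Font color conditional formatting of the column at this measure using the field value option, and the colors follow the data on every refresh.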

6. Python integration is now in Preview – another widely requested feature, and it works much like the existing R integration. Under Options you’ll see Python Scripting (the setup follows the same course as the R setup), where you specify the home directory and the IDE that should open. With this little bit of initial setup, it’s on and in place. You’ll now see a Python visuals icon under the Visualization section, and you can leverage Python plotting libraries for data visualizations directly inside of Power BI.

You can also use Python when bringing in data: simply click Get Data, choose Python script, and paste in your script. If you haven’t been using Python, it’s a language frequently used for data science and analysis, so we can now do data cleansing and statistical analysis with functionality that is not natively available inside Power BI. You can also use Python in the Query Editor to run transforms for data cleansing and value replacement.
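For a sense of what a Python visual script looks like: Power BI hands the fields you drag into the visual to the script as a pandas DataFrame named dataset. The column names below (Category, Sales) are placeholders for whatever fields you actually use – this is just a minimal sketch, not the demo from the video:

import matplotlib.pyplot as plt

# 'dataset' is the DataFrame Power BI passes into a Python visual.
# 'Category' and 'Sales' stand in for whichever fields you add to the visual.
summary = dataset.groupby("Category")["Sales"].sum().sort_values()

summary.plot(kind="barh")
plt.xlabel("Total Sales")
plt.tight_layout()
plt.show()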

7. Q&A inside the Power BI Desktop is now generally available. This allows you to ask questions of your data interactively within your reports just like you can in the Power BI Service.

8. Last but certainly not least, Data Categories on Measures. You’ve likely worked with data categories if you’ve worked with geography fields or links that need to be marked as a Web URL. We can now apply data categories to measures, so if you have a measure that effectively produces a URL, it can render as a clickable link inside your Power BI report instead of showing up simply as text. Plus, not only can you click on it in your report on the Desktop, that clickable link will also work in your exported PDF when using the feature we talked about earlier.
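As a quick, hypothetical example of the kind of measure this enables (the table and column names are placeholders), you could build a URL in a measure and then set its Data category to Web URL on the Modeling tab:

Product Search Link =
"https://www.bing.com/search?q=" & SELECTEDVALUE ( Products[Product Name], "Power BI" )

Drop the measure into a table visual and, once the category is set, it renders as a clickable link rather than a text string.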

Hope you enjoyed our top August Power BI updates. Please take a moment to comment below and tell us what you like about the new updates and what you’re excited about. And remember to share with us any great use cases or cool examples you’ve tried, and we may showcase them in our monthly digest. See you next month!



Power BI Monthly Digest–July 2018


Last month’s digest was a bit on the lighter side, but not this month! The Power BI team is great at listening to the community and July is packed with new updates to share with you. Be sure to watch the video included here for demos of all these new features and updates.

1. Let’s begin with something we’re excited about and have been waiting for; this Composite Model update comes with a few parts so let’s dig in. With this update we can now use Direct Query with Imported sources (insert celebration horn here!).

For those of you unfamiliar with Direct Query, it gives you the ability to connect directly to a SQL Server database, for instance, without having to import the data and bring it into your model. Previously, if you were leveraging this feature, you couldn’t also import data from another source, or even direct query another source.

This is a huge feature that we’ve been waiting on, but they didn’t stop there. Not only can we have an imported source alongside a direct query source, but – to use our demo example, where we had 4 different tables coming from our direct query source – they’ve also created a ‘hybrid’ storage option for those direct query tables called Dual. You now have 3 different storage modes: Import, Direct Query and Dual.

2. Dual Mode, along with the Composite Model, is still in preview but is available to anyone, not just Premium users. Before, if you wanted to do Direct Query, it was all or nothing. If I’m connecting to a SQL Data Warehouse, for example, and I only want the performance benefits of Direct Query for a couple of tables, I would still have to bring all the necessary tables in through that one source connection. But say we have a more static reference table within that set, one that is less dynamic but whose fields will still be part of the visuals – now you can choose to import only those static tables while still using Direct Query for the larger tables that are constantly changing.

With Direct Query, if my visual uses a field from a table that is part of the Direct Query source, the query is sent out to the source server and the response comes back. From a performance perspective, you’re putting more of the workload on the SQL Server side.

But let’s say we have a list – countries or companies, for instance – that we’re using as legends within our visuals, something that isn’t changing often. We can switch the table holding those columns to Dual, which imports the data and helps render the visuals quicker because we aren’t waiting on a response from the server.

So, what Dual does is allow you to choose, say, 1 of the 4 tables in our demo to have that import behavior. This definitely has an impact from a performance perspective.

3. Another part of these new capabilities, one that comes into play for those doing data modeling, is the Many to Many relationship. In the relationship dialog, there is now an option under cardinality that says Many to Many.

This update is an answer to a workaround we’ve had to use whenever 2 related tables both had non-unique values: we had to create a bridge table (connecting them and using cross filtering) that contained all the unique values. The update eliminates that need. We can now connect 2 tables and create a relationship between columns that are non-unique.

This may be especially helpful to new Power BI users, as it’s common to see new users bring in 2 tables without realizing that one of those tables must have a unique value in the column they’ll be related on. Now you can relate 2 tables where neither has a unique value and still be able to connect them.

You do need to realize this impacts the data you present when creating a visual based on a Many to Many relationship, but it’s a great addition, and there’s still a message alerting you to this impact when you create this type of relationship.

Now let’s move from the modeling side of things to the reporting side.

1. Header changes – Under Report Settings you now have an option to select ‘Use the modern visual header with updated styling options’. With this on, the header is more compact, and they’ve added some additional settings and styling options. These changes impact the report creators, not so much the users/readers.

Under the Format section/Visual Headers, there are quite a few settings with this feature, and many are fairly impactful from the report development side. A key note: when you select/deselect these options, you won’t see most of these changes until you deploy the report to the service and view it in read mode.

2. It’s important to appreciate the smaller things, right? Some may have noticed in our demo that there are now little animations when working within the visualization pane, such as a small animation when shifting from the Field section to the Format section and when expanding options. Also, within the options, each property label now sits above its value instead of the toggle being off to the side next to it – a cleaner, easier-to-read look. Keep an eye out for these in the remainder of our demo.

3. Another aesthetic change is Wallpaper. You’ll notice on the report canvas that a dotted line now marks the edge of our design area. In the past, the area around the design area was grey, but now that area beyond your report design surface can have wallpaper.

The purpose of this is to extend your theme across everything on the screen, especially when deploying to the service, so people see the theme continue beyond your report. It’s great for people who use background images: the wallpaper carries that look across an even greater space. You can make the wallpaper a color as a bordering element, as well as make the background of the report transparent so readers will see the wallpaper, not the background.

4. Tool Tips – we know everyone loves Tool Tips. In a past digest post we introduced the Report Page Tool Tips and the great functionality it brought, but sadly, two visuals had been left out – the table and the matrix – and not just on the Report Page, but with Tool Tips in general. Many asked about Report Page Tool Tips for the Table and Matrix, and our answer – they don’t even have regular Tool Tips! Well, they’ve finally brought them both to us; something we’re excited to see.

5. We previously talked about a feature in the Line or Combo Chart where we could control the width of the line or make it go away and have just dots. Another visual element has been added to this under the Format/Shapes Section, where we can move from a gradient line to a stepped appearance. This may be a great option for specific use cases with a Line or Combo Chart.

6. Lastly, some sorting changes have been made for visuals. They have taken the options we’re used to and broken them out individually. Now there is a separate option for choosing Ascending or Descending, as well as a separate option for selecting the field that should be used for sorting. There are more options to configure now, but it definitely adds a layer of clarity that was missing.

Believe it or not, there are more updates that came out in July; we just picked some of our favorites to share. If you want to learn about them all, check out the Power BI Blog.

That’s it for this month’s update. We hope you can get in and start trying some of these new features and updates right away. We’ll be back next month with Power BI Updates for August—see you then!


Power BI Monthly Digest–June 2018


Welcome back to our Power BI Monthly Digest for June! This month there are fewer new features to share than previous months, but Microsoft always listens and here are some impactful changes we’d like to highlight.

1. Column Stroke Width – Last month we discussed this in relation to the combo chart, where we could do a dot plot on top of a column chart and eliminate the line to compare 2 measures instead of showing a trend. This allowed more transparency and cleaned up that chart a bit. Now they’ve taken this further: not only can we more clearly compare two measures, we can also customize and format the data labels for those two measures separately.

In our demo, we used the same chart as last month, and you’ll see how it can be confusing which label goes with which measure. This new feature allows us to go into the Data Labels section and customize each series that you have. You simply choose the series and format it by changing the font family, size and/or color, allowing us to clearly see the difference between the two measures.

2. Donut Chart or Pie Charts – Not typically a fan favorite but there certainly are use cases for these and there have been some changes made to the Donut Chart to improve the visibility. As you know, the Donut Chart gives us the open space in the middle, possibly for an image, label or text. This new feature allows us to control the thickness of the outline that illustrates the value we’re portraying.

Also, we could always show labels for the category, the data or both, but we were missing the ability to position those labels. Now you can, and it relates to the scaling of the visual: the labels can sit inside the donut itself. So, if you make the outline a bit thicker, you can then set preferences where some labels are outside and some are inside that outline. This label positioning also applies to pie charts.

To do this, you simply go into Shapes in the Format section and choose the setting for Inner Radius. When you make the inner radius smaller, you have more room within the visual to display the labels. You can also adjust the labels inside the Detail Labels section to change their position to inside, outside or both, depending on the size of the chart and the inner radius.

3. Filtering and Sorting in the Data View – A long-requested feature from customers. The Data View has always been sort of ‘there’, but besides helping with calculated columns, it didn’t do much. Power BI is based on Excel technology, and the Data View has long been a feature inside Excel; now we can filter and sort inside the Data View in Power BI as well.

More importantly, if you do filter and sort, it does not impact anything in the report view. It’s exclusive to the Data View, which can be helpful to see what’s going on and basically let us better see what we want to look at.

4. Accessibility for Visually Impaired – High Contrast Mode has been around for a while in Windows to assist those with visual impairments. Up until now, when you turned on High Contrast in Windows, Power BI visuals didn’t really inherit those properties, so the visuals did not change. Now when you turn this on in Windows, these effects carry over to your Power BI visuals.

That’s it for this month. We’d like to hear what you think; what features you’re most interested in or excited about or about things you’re looking forward to in the future. You can always submit ideas to the Power BI site. If you’ve got some good examples of how you’ve implemented these and how creative you’re being with Power BI, we’d love to hear about it—we may even showcase your example!

Be sure to tune in next month for our Power BI Monthly Digest for July!


Power BI Monthly Digest–May 2018


Welcome to the May edition of our Power BI Monthly Digest series. I’m here, along with Pragmatic Works Training Director, Devin Knight, to bring you all the latest features and updates for Power BI. Our April post covered some smaller features, but the May features we’ll discuss today are some bigger, more in-depth features that we think you’ll be excited about.

1. Conditional Formatting on a Different Field – Most of you are probably already familiar with this feature. Conditional formatting came about in the pursuit of making things easier to understand and read, but some new additions take this to another level. The basis of conditional formatting is, let’s say you choose a column and you want to display a color based on performance, profits for instance, with higher profit amounts in green and lower amounts in red.

Now we can take a column and color it based on a different field. A traditional matrix typically contains a lot of information to take in. With conditional formatting and the new ‘Color Based On’ field, we can set up colors by rules and value ranges, bring a boring report to life, and make it much easier for users to digest our information.

But wait, there’s more! Before, we had that dropdown capability only for numeric values. Now we can do this for text and date fields as well: simply click on a date or text field, get the dropdown with the conditional formatting option and choose your colors, making it even easier to see what you want to see in your report.

2. Sync Slicers – Advanced Options – We’re probably going to have multiple pages in a report, and sync slicers were previously released to give us the ability to synchronize a slicer across multiple pages, instead of adding separate slicers and having each one individually dictate what happens on its own page.

Now, there is an ‘advanced options’ area within that section that gives us the ability to apply a group name. Beyond having one slicer that appears on multiple pages and filters across them, we can now associate as many slicers as we like and have them all be part of the same group. This helps on the development side, as well as the consumption side, by having different slicers joined together by that grouping.

3. Zero Stroke Width on Line Charts – This next feature is about visualizing your data on combo charts (column and line) and plain line charts. Line charts are typically about showing trends in your data over time. But what if you don’t have a trend element to your data and just want points at each value, or want to compare two measures?

This new feature gives you the ability to set the line width to zero using the Stroke Width property in the Format area, removing the line element so you can have just points on your visual. We can turn on the ‘Show Marks’ setting and choose the size, shape and color of our points. In some use cases a line really serves no purpose on a report, and this feature lets us create a cleaner look that is more easily consumable by the end user.

4. Drillthrough on a Measure – Another fan favorite that has some new changes added. First off, drillthrough now has its own section instead of having to go into Filters and find drillthrough, where you could select a field and drop that in there. I used to view this as a baseball catcher waiting to receive a filter for the category you’ve told it, so if I say ‘region’, I’m waiting to catch a value of region.

Now we can go into any page, choose a data point and right-click on it. We can pass this through to our drillthrough report, and the filter context – anything related to region, to stick with my example above – will be passed along and that report will be filtered.

But what if that point is from a matrix with a lot of cross sections built into it? Everything other than region would be ignored, unless we took the time and effort to add each individual field into the drillthrough area. With this new feature, if I want to keep all the filter context that goes along with the point I selected, I simply need to flip the toggle in the drillthrough section to on. Yes, it’s as easy as turning on a button.

There’s also another addition to drillthrough that allows you to drill through on measures, whether implicit or explicit, and you’ll see the summarized values of these.

5. New From Web Experience (Preview) – This next feature is still in preview. You’re likely not going to base an entire data model around the web connector, but you might use a web connection to pull in data from the web to supplement a data source you already have – maybe internal company data – to help you make a decision.

I would describe the usability enhancements they’ve made as the way all of us as users always wanted the web connector to work. When I use it, I love it, but there are times when the URL I input, and how Power BI reads it, doesn’t give me exactly what I’m looking for. There is a web preview tab where I could see what I wanted – a table, perhaps – but I couldn’t get to it. This new feature allows us to do that.

It takes advantage of a feature called ‘Add Columns from Examples’. They’ve applied that element here: by providing an example of what you want the data from a web source to look like, Power BI builds the logic to pull out that specific area of the page instead of whatever it thinks you’re looking for. It gives us control in the web source to choose exactly what we want to see.

6. Incremental Refresh (Preview) – The last feature has probably been the most awaited by many Power BI users. If you’ve got lots of data and have been thinking, is there a way to incrementally load or refresh it, rather than loading millions or billions of rows every time I refresh? Now there is.

In traditional developer fashion, it follows much the same pattern you would use in an SSIS package: we say, look at this column, and grab information based on the date ranges we select. This is currently in preview and only for Power BI Premium members, but this is its first iteration and we’re excited to see it make its introduction into Power BI, especially from an efficiency perspective.
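The setup leans on two reserved date/time parameters, RangeStart and RangeEnd, which you create in the Query Editor and use to filter the large table; the incremental refresh policy then fills those parameters in at each refresh. A rough sketch of the filtering step – the server, database, table and column names here are all placeholders – might look like this:

let
    Source = Sql.Database("MyServer", "SalesDW"),
    Orders = Source{[Schema = "dbo", Item = "FactOrders"]}[Data],
    // RangeStart and RangeEnd are the reserved datetime parameters
    // that the incremental refresh policy controls at refresh time
    Filtered = Table.SelectRows(Orders, each [OrderDate] >= RangeStart and [OrderDate] < RangeEnd)
in
    Filtered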

These are some great, cool new features and some that we’ve been waiting on. Watch the video for quick demos of all these features so you can incorporate them into your reports right away. Be sure to subscribe to Pragmatic Works YouTube channel to view all our Power BI updates throughout the month. And we’ll be back next month with all that’s new and exciting with Power BI!


Power BI Monthly Digest–April 2018


Welcome to the first edition of our new series called Power BI Monthly Digest. I’ll be your host, along with Training Director, Devin Knight, and we’ll be exploring the top updates/features for Power BI released each month. We will hone in on the high impact releases that we know you’ll be excited about!

1. New DAX function: COMBINEVALUES() – This new DAX function allows multi-part keys to be handled in a more efficient way when using Direct Query; using COMBINEVALUES generally results in a more efficient SQL query. Another use case is joining a set of column values with a delimiter without nesting a series of CONCATENATE() functions.
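For example – with made-up table and column names – building a single key from an order number and a line number as a calculated column, alongside the older nested pattern it replaces:

Order Key = COMBINEVALUES ( "|", Orders[OrderNumber], Orders[LineNumber] )

// the older approach this simplifies
Order Key Old = CONCATENATE ( Orders[OrderNumber], CONCATENATE ( "|", Orders[LineNumber] ) )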

2. Numeric Slicer – This has been in preview but is now generally available. With this feature, if you have a numeric value that you want to filter on, you can bring the slicer in and, depending on the data type you select, it will give you different contextual types of filters.

For example, if you bring in a spend value, it will give you a numeric slicer with a slider that you can move to change the range of values you’re looking at. You can slide that range anywhere on the scale and it will affect everything in your report based on the selection you made with the slider, so you can quickly and easily adjust your reports.

3. Q&A Explorer – The Q&A feature has been very popular since introduced, and this new update (although still in preview) takes integrating the Q&A experience in your reports for your users even further.

The Q&A feature allows users to double click anywhere in the background of your report and a question box will appear, where they can type in a question about the data and an answer will immediately pop up. Now the idea of Buttons has been added with Q&A Explorer.

You can now have a set of pre-suggested questions, giving guidance to your users. You can add a button with text that tells them “click here for suggested questions”, which, when clicked, launches a new window with the suggested questions you have chosen. If you put the initial work into creating these questions, you can even create multiple buttons with different categories, which makes for more effective navigation for your users.

4. Linguistic Schema (A New Enhancement to Synonyms) – With the Synonyms feature, you can go in and create a list of synonyms inside your relationship view, so if a user types in any of the pre-created synonyms, they will get the information they need from the correct column returned to them.

But developers may not know all the synonyms that people are going to need. With Linguistic Schema, you can export the schema from your model, save it and hand that file over to someone who will know what people may ask – a business user, for example. They can type in synonyms for each column name, and those will be leveraged when Q&A interprets what users type. The file can then be re-imported into the report.

We’re excited about these top features, as well as others like the new Custom Themes feature (Preview) and the Organizational Container for Visuals feature. All these features give you the capability to set up some standardization across your organization for development.

We’ve given you a quick view with our demo, so you can incorporate these new features right away in your Power BI reports. We hope you’re as excited as we are to use them! We’ll be back each month to share our insights of what’s new and exciting in Power BI.


Power BI – Problem, Design, Solution (Folder Source Option for JSON Files)

Hello Everyone! In this episode of the “Problem, Design, Solution” series we will be talking about the “Folder” option inside the sources display for Power BI Desktop. This is a fantastic feature that works very well with your standard delimited/CSV (comma-separated value) files, making life easier by importing whatever files you have in a directory. The only prerequisites needed to leverage this feature are that the files must be of the same file type and structure (same number of columns and names). There is one issue though: even when meeting these minimal standards, sometimes the “Folder” option just doesn’t load the files like we would want, and this is the case with JSON files. We will explore how we can work around this issue by recreating what the “Folder” option achieves while taking more control over the process. For those of you who appreciate a more visual approach, there is a detailed video at the end of this blog that you can skip to. Even so, this will be a detailed and visual written companion to the video. I hope you enjoy!

Problem

Traditionally the “Folder” source option gracefully handles all of your delimited text files that are stored in a directory. This makes it extremely easy and quick to load that data into Power BI. Unfortunately there are some common formats or structures used in text files that this option does not handle well, namely those in the JSON format. I will not go into depth explaining the JSON format, as this whole blog is about how to load these types of text files into Power BI, so it is assumed you know what I mean when I say JSON. As you can see in the screenshot below, using the Folder option is very simple, but the outcome is not desirable; we can immediately see that we need to pivot the information to better suit our needs, amongst other things.

[Screenshot: FolderJSON]

As you can see in the image below, after the “Folder” option performs all of its operations to bring the information into one table, you can see all the steps in the Queries pane. You could move through these steps and adjust the output to the way you want it, but this can be time consuming. The reason I don’t recommend it is that the steps have an order of dependencies, and usually when you change one step you then need to address the impact in every step that depended on it. Instead, let’s talk about a way to take more control of this process from the beginning while still taking advantage of being able to pull together all the data that exists in multiple files in a folder!

[Screenshot: CombineFile]

Design

Since we now understand how awesome the “Folder” option is inside of Power BI, we need to figure out how to make this concept work for our JSON files. To start, we can borrow from what we see in the previous screenshot. We won’t copy everything that the “Folder” option does, but definitely a large majority of it – let’s break it down:

1. We can see that there is an item that is pulling a sample file

2. We also can note that there is a table where transforms are being applied

3. Lastly the “Folder” option is using a function to pass through the values of all files found in the specified directory

So let’s now take these ideas and do them ourselves in four easy steps!

Step 1 – Point to One of the JSON Files in the Directory in Order to Apply All the Transforms We Want

The way we will achieve this is by first locating the “JSON” option inside the list of source options in Power BI. Once selected, we will simply navigate to the directory we eventually want to load and select any of the JSON files there. It really doesn’t matter which file you choose, because we’ve already discussed that the files here should all be of the same file type and structure. Once we have pointed to the file in question and hit “OK”, the file will be loaded into the Query Editor and we really only have one option here: to convert the data into a table format, as seen below.

[Screenshot: JSONFile]

Once it has been converted to a table, you can do whatever transformations need to occur; in the video example the only step we take is to pivot the table and choose not to aggregate any of the values, as seen below once again. I have also gone ahead and renamed the table “JsonFile”. We have now completed step one by pointing to a sample file and applying the necessary transforms. Now all we need to do is find a way to get a list of all files that exist in the folder we want to process…
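For reference, at this point the M behind the “JsonFile” query looks roughly like the following; “Sample.json” simply stands in for whichever file you happened to pick from the folder:

let
    Source = Json.Document(File.Contents("C:\Users\Manuel Quintana\Desktop\PDS JSON\Files\Sample.json")),
    #"Converted to Table" = Record.ToTable(Source),
    #"Pivoted Column" = Table.Pivot(#"Converted to Table", List.Distinct(#"Converted to Table"[Name]), "Name", "Value")
in
    #"Pivoted Column"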

Step 2 – Use the Folder Option to Point to the Directory in Question

WHAT?!?! We just talked about this in the Problem section above, so why would we want to do it? The answer is simple: we have already decided that the data isn’t in the right format when using the “Folder” option, but there is one thing it does do right. The first column it creates is meant to capture the source the data was brought from, which in our case is the file name, and that is exactly what we want. So after we select the folder to import from, we can delete all columns except “Source.Name”. We aren’t done just yet, because as you will quickly see the filename is repeated for every row that came out of each file, but Power BI provides an easy solution. Go ahead and right-click the “Source.Name” column and you will see an option to “Remove Duplicates”; let’s also rename this column to something friendly like “Filename”, as we see below. Also rename the table to something friendly if needed; in my example I have left it as “Files”.

[Screenshot: FileName]

Voila! Now we have a distinct list of the filenames in the directory. As well, the way this connection is set up, if files are added and/or removed it will keep up with those changes, so every time we refresh we will have an accurate list of what is in the designated folder. Step 2 now has a checkmark next to it, but we still need to figure out how to tie these two steps together: we want all the transforms we set up in step 1 to apply to whatever files were discovered in step 2. The answer – use a function!
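As a side note, if you would rather build this file list by hand instead of clicking through the Folder dialog, a small M query like the sketch below produces the same distinct list. Folder.Files already returns one row per file, so there is nothing to de-duplicate; the path is the same demo folder used throughout this post:

let
    Source = Folder.Files("C:\Users\Manuel Quintana\Desktop\PDS JSON\Files"),
    // keep only the file name column; Folder.Files gives one row per file
    FileNames = Table.SelectColumns(Source, {"Name"}),
    Renamed = Table.RenameColumns(FileNames, {{"Name", "Filename"}})
in
    Renamed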

Step 3 – Create a Function from the First Table Where We Transformed Our Sample JSON File

Here we actually need to revisit the first table we created, where we pivoted our data. We are going to modify the M query that was written in the background for us for this table. As you can see in the screenshot below, there is a line of M that indicates the type of source connection that was used and the specific path of the file we chose.

[Screenshot: UseMQuery]

This is where we are going to make a change, but first we must add a parameter declaration to the beginning of this query. The syntax is as follows:

(file as text) =>
let
    Source = Json.Document(File.Contents("C:\Users\Manuel Quintana\Desktop\PDS JSON\Files\" & file)),
    #"Converted to Table" = Record.ToTable(Source),
    #"Pivoted Column" = Table.Pivot(#"Converted to Table", List.Distinct(#"Converted to Table"[Name]), "Name", "Value")
in
    #"Pivoted Column"

As you can see in the query, we have removed the hard-coded file name and replaced it with the parameter called “file”. Once we select the “Done” button, the table changes into a function and gives you the option to pass a value to the parameter in order to invoke the function – leave this alone for now. We now have all the pieces of the puzzle; we just need to put it all together.

[Screenshot: FinishedM]

Step 4 – Invoke a Custom Function to Create a New Column

This is definitely the easiest and final part of setting up this design. Go ahead and select the table that was created in step 2. Now select “Add Column” in the ribbon at the top and we will be using the “Invoke Custom Function” option. As you can see in the image below, this will prompt a menu where we need to give a name for the newly created column. In my case I have called it “Details”, but this really doesn’t matter, as we will see very shortly. Next we must specify which function we will be using from the drop-down; mine is called “JsonFile”. Lastly we must decide what value we are going to pass to the parameter for this function. Looking back at our function, we want to pass through the names of the files that exist inside our directory, which is exactly what we have in this table within the “Filename” column! After selecting the correct column and hitting OK, the new column is created and we can see that it has some packed data inside of it, as seen below.
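Behind the scenes, the dialog adds a single step to the “Files” query along these lines; the name of the preceding step (#"Renamed Columns" here) will vary depending on what your earlier steps were called, so treat this as a sketch rather than the exact generated code:

// invoke the JsonFile function once per row, passing in the Filename value
#"Invoked Custom Function" = Table.AddColumn(#"Renamed Columns", "Details", each JsonFile([Filename]))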

[Screenshot: InvokeFunction]

All you need to do is select the expand icon that appears on the newly created “Details” column and you will be shown a list of all the columns packed inside. I have opted to unselect the “id” column in my example because it is a duplicate of my “Filename” column. Also, at the very bottom you will see the option “Use original column name as prefix”. If you leave this checked when you hit OK, all of the new columns will be named like “Details.ColumnName”, where ColumnName is each field coming from the JSON file. I personally do not use this option, as you can see below.

[Screenshot: expanding the Details column]

Take a step back and look at the beautiful table you have created which contains all the details of the JSON files inside of your chosen directory. What is also fantastic is that moving forward when you refresh this Power BI report it will always bring back exactly what is in your directory, no additional effort needed!

Solution

With our design fully implemented, we can hit “Close & Apply”, bring our data into the model, and start visualizing and analyzing it. This design pattern can also be used in other scenarios; I have personally used it to cycle through different pages of a website in order to bring back all the information I was looking for. Functions can be an extremely powerful ally in Power BI! Have a great day!

Resources

Power BI Visualizations Follow Up

First, a thank you to everyone who attended my webinar and for all of your positive comments. For those of you coming to my blog who did not get a chance to attend the webinar, you can see it here. There were quite a few submitted questions, and I have chosen a couple of really good ones to share in this follow-up video blog. Some other questions were more specific to the individuals who asked them, and I will be working on those and reaching out personally. I hope you enjoy the video below; I have included links afterwards for the items you will see in it. Enjoy!

– Link for the Color Blind Friendly site here

– Link for creating a JSON file for a color palette here

– Blog for using UNICHAR in Power BI here

– Pragmatic Works On Demand Training here