Power BI Monthly Digest–June 2018


Welcome back to our Power BI Monthly Digest for June! This month there are fewer new features to share than in previous months, but Microsoft is always listening to feedback, and here are some impactful changes we'd like to highlight.

1. Column Stroke Width – Last month we discussed this in relation to the combo chart, where we could put a dot plot on top of a column chart and eliminate the line in order to compare two measures instead of showing a trend. This allowed more transparency and cleaned up the chart a bit. Now they've taken it further: not only can we more clearly compare two measures, but we can also customize and format the labels for those two measures separately.

In our demo, we used the same chart as last month, and you'll see how a few things may be confusing about which label belongs to which series. This new feature allows us to go into the Data Labels section and customize each series you have. You simply choose the series and format it by changing the font family, size and/or color, allowing us to clearly see the difference between the two measures.

2. Donut Chart or Pie Charts – Not typically a fan favorite, but there certainly are use cases for these, and some changes have been made to the Donut Chart to improve visibility. As you know, the Donut Chart gives us open space in the middle, possibly for an image, label or text. This new feature allows us to control the thickness of the outline that illustrates the value we're portraying.

Also, we could always add labels to show the category, the data or both, but we were missing the ability to position those labels. Now the labels relate to the scaling of the visual, and you can place them inside the donut. So, if you want to make the outline a bit bigger, you can then set preferences so that some labels are outside and some are inside that outline. This label positioning also applies to Pie Charts.

To do this, you simply go into Shapes in the Format section and choose the setting for Inner Radius. When you make the inner radius smaller, you have more room within the visual to display the labels. You can also adjust the labels inside the Detail Labels section to change their position to inside, outside or both, depending on the size of the chart and the inner radius.

3. Filtering and Sorting in the Data View – A long-requested feature from customers. The Data View has always been sort of 'there', but besides helping with calculated columns, it didn't do much. Power BI is based on Excel technology, and a data view has been a feature inside Excel for years; now we can filter and sort inside the Data View in Power BI as well.

More importantly, if you do filter and sort, it does not impact anything in the Report View. It's exclusive to the Data View, which can be helpful for seeing what's going on and letting us focus on exactly what we want to look at.

4. Accessibility for Visually Impaired – High Contrast Mode has been around for a while in Windows to assist those with visual impairments. Up until now, when you turned on High Contrast in Windows, Power BI visuals didn't inherit those properties, so the visuals did not change. But now when you turn this on in Windows, the effect carries over to your Power BI visuals.

That's it for this month. We'd like to hear what you think: what features you're most interested in or excited about, or what you're looking forward to in the future. You can always submit ideas to the Power BI site. If you've got some good examples of how you've implemented these features and how creative you're being with Power BI, we'd love to hear about it; we may even showcase your example!

Be sure to tune in next month for our Power BI Monthly Digest for July!


Power BI Monthly Digest–May 2018


Welcome to the May edition of our Power BI Monthly Digest series. I’m here, along with Pragmatic Works Training Director, Devin Knight, to bring you all the latest features and updates for Power BI. Our April post covered some smaller features, but the May features we’ll discuss today are some bigger, more in-depth features that we think you’ll be excited about.

1. Conditional Formatting on a Different Field – Most of you are probably already familiar with this feature. Conditional formatting came about in the pursuit of making things easier to understand and read, but some new additions take this to another level. The basic idea of conditional formatting is that you choose a column and display a color based on performance: profits, for instance, with higher profit amounts in green and lower amounts in red.

Now, we can color a column based on a different field. A traditional matrix typically contains a lot of information to take in. With conditional formatting and the new 'Color Based On' field, we can set up colors by rules and value ranges, bringing a boring report to life and making it much easier for users to digest our information.

But wait, there's more! Before, that dropdown capability existed only for numeric values. Now we can do this for text and date fields too: we simply click on a date or text field, get the dropdown with the conditional formatting option and choose our colors, adding more ability to easily see what we want in our report.

2. Sync Slicers – Advanced Options – Of course, we're probably going to have multiple pages in a report, and Sync Slicers was previously released to give us the ability to synchronize a slicer across multiple pages, instead of adding a slicer to each page and having each one independently dictate what happens on its page.

Now, there is an 'advanced options' dropdown within that section that gives us the capability to apply a group name. Before, the idea was one slicer appearing on multiple pages, configured so its filters apply across those pages; now we can bring as many slicers as we like into the picture and have them be part of the same group. This helps on the development side, as well as the consumption side, by having different slicers all joined together by that grouping.

3. Zero Stroke Width on Line Charts – This next feature is about visualizing your data on combo charts (column and line) and plain line charts. Line charts are typically about showing trends in your data over time. But what if your data doesn't have a trend element, and you want to use the line's points to compare two measures at each data point instead?

This new feature gives you the ability to set the line width to zero using the Stroke Width property in the Format area, removing the line element so you can have just points on your visual. We can then turn on the 'Show Marker' option and choose the size, shape and color of our points. In some use cases a line really serves no purpose on a report; this feature allows us to create a cleaner look that is more easily consumed by the end user.

4. Drillthrough on a Measure – Another fan favorite that has some new changes. First off, drillthrough now has its own section, instead of your having to go into Filters to find drillthrough, where you could select a field and drop it in there. I used to view this as a baseball catcher waiting to receive a filter for the category you've told it about: if I say 'region', I'm waiting to catch a value of region.

Now we can go to any page, choose a data point and right-click on it. We can pass this through to our drillthrough report, and the filter context, anything related to region (to stick with my example above), will be passed along, and said report will be filtered.

But what if that point comes from a matrix with a lot of cross-sections built into it? Everything other than region would be ignored, unless we took the time and effort to add each individual field into the drillthrough area. With this new feature, if I want to keep all the fields I need and accept the filter context that goes along with the point I selected, I simply need to flip the toggle to 'on' in the drillthrough section. Yes, it's as easy as turning on a switch.

There's also another addition to drillthrough that allows you to drill through on measures, whether implicit or explicit, and see their summarized values.

5. New From Web Experience (Preview) – This next feature is still in preview. You're likely not going to base an entire data model around the Web Connector, but you might use a web connection to pull in data from the web to supplement a data source you already have, maybe internal company data, to help you make a decision.

I would describe the usability enhancements they've made as the way all of us as users always wanted the web to work. When I use the Web Connector, I love it, but there are times when the URL I input isn't read the way I expect, and I don't get exactly what I'm looking for. There was a web preview tab where I could see what I wanted, a table perhaps, but I couldn't get to it. This new feature allows us to do that.

It takes advantage of a feature called 'Add Column From Examples'. They've taken that element, and you can now create logic that allows the web source to pull information out of that URL. By providing an example of what you want the data to look like from a web source, it will pull out that specific area of the data instead of what Power BI thinks you're looking for. It gives us control in the web source to choose what we want to see.

6. Incremental Refresh (Preview) – The last feature has probably been the most awaited by many Power BI users. If you’ve got lots of data and have been thinking, is there a way to incrementally load or refresh it, rather than loading millions or billions of rows every time I refresh? Now there is.

In traditional developer fashion, it follows the same process you may know from running SSIS packages: we can say, look at this column, and grab information based on the date ranges we select. This is currently in preview and only for Power BI Premium users, but this is its first iteration and we're excited to see it make its introduction into Power BI, especially from an efficiency perspective.
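To give a sense of the setup, incremental refresh relies on two reserved datetime parameters, RangeStart and RangeEnd, that you filter your source query on; Power BI then substitutes the date window at refresh time so only those rows are loaded. Here is a minimal Power Query (M) sketch, where the server, database, table and date column names are hypothetical:

let
    // hypothetical SQL source; any source with a date/time column works the same way
    Source = Sql.Database("MyServer", "MyDatabase"),
    FactSales = Source{[Schema = "dbo", Item = "FactSales"]}[Data],
    // RangeStart/RangeEnd are the reserved parameters the refresh policy fills in
    FilteredRows = Table.SelectRows(FactSales, each [OrderDate] >= RangeStart and [OrderDate] < RangeEnd)
in
    FilteredRows

With a filter like this in place, you can right-click the table and define the incremental refresh policy, such as how much history to keep and how large a window to refresh each time.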

These are some great new features, including some that we've been waiting on. Watch the video for quick demos of all these features so you can incorporate them into your reports right away. Be sure to subscribe to the Pragmatic Works YouTube channel to view all our Power BI updates throughout the month. And we'll be back next month with all that's new and exciting in Power BI!


Power BI Monthly Digest–April 2018


Welcome to the first edition of our new series called Power BI Monthly Digest. I'll be your host, along with Training Director Devin Knight, and we'll be exploring the top updates and features released for Power BI each month. We will home in on the high-impact releases that we know you'll be excited about!

1. New DAX function: COMBINEVALUES() – This new DAX function allows multi-part keys to be handled more efficiently when using DirectQuery; using COMBINEVALUES generally results in a more efficient SQL query. Another use case is joining a set of values with a delimiter without having a set of nested CONCATENATE() functions, as sketched below.
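As a quick sketch of that second use case (the table and column names here are hypothetical), a calculated column that builds a multi-part key could look like this in DAX:

CombinedKey = COMBINEVALUES ( "|", Orders[CountryCode], Orders[OrderNumber] )

versus the nested alternative it replaces:

CombinedKey = CONCATENATE ( Orders[CountryCode], CONCATENATE ( "|", Orders[OrderNumber] ) )

In DirectQuery mode, relating two tables on a COMBINEVALUES column is what lets Power BI translate the relationship into that more efficient SQL join.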

2. Numeric Slicer – This has been in preview but is finally generally available. With this feature, if you have a numeric value you want to filter on, you can bring the slicer in, and depending on the data type you select, it will give you different contextual types of filters.

For example, if you bring in a spend value, you get a numeric slicer with a slider you can move to change the range of values you're looking at. You can slide that range anywhere on the scale and it will affect everything in your report based on the selection you made, so you can quickly and easily adjust your reports.

3. Q&A Explorer – The Q&A feature has been very popular since it was introduced, and this new update (although still in preview) takes the integration of the Q&A experience into your reports even further.

The Q&A feature allows users to double-click anywhere in the background of your report to bring up a question box, where they can type in a question about the data and an answer will immediately pop up. Now the idea of buttons has been added with Q&A Explorer.

You can now have a set of pre-suggested questions, giving guidance to your users. You can add a button with text that tells them to "click here for suggested questions", which, when clicked, will launch a new window with the suggested questions you have chosen. If you put in the initial work of creating these questions, you can even create multiple buttons with different categories; this creates more effective navigation for your users.

4. Linguistic Schema (A New Enhancement to Synonyms) – With the Synonyms feature, you can create a list of synonyms inside your Relationship view, so if a user types in any of the pre-created synonyms, they will get the information they need returned from the correct column.

But developers may not know all the synonyms that people are going to need. With Linguistic Schema, you can export the schema from your model, save it and hand the file over to someone who knows what people may ask, a business user for example. They can type in synonyms for each column name, and Q&A will leverage these when interpreting what users type. The file can then be re-imported into the report.

We’re excited about these top features, as well as others like the new Custom Themes feature (Preview) and the Organizational Container for Visuals feature. All these features give you the capability to set up some standardization across your organization for development.

We’ve given you a quick view with our demo, so you can incorporate these new features right away in your Power BI reports. We hope you’re as excited as we are to use them! We’ll be back each month to share our insights of what’s new and exciting in Power BI.


Power BI – Problem, Design, Solution (Folder Source Option for JSON Files)

Hello everyone! In this episode of the "Problem, Design, Solution" series we will be talking about the "Folder" option inside the sources display for Power BI Desktop. This is a fantastic feature that works very well with your standard delimited/CSV (comma-separated value) files, making life easier by importing whatever files you have in a directory. The only prerequisites needed to leverage this feature are that the files must be of the same file type and structure (same number of columns and names). There is one issue, though: even when meeting these minimal standards, sometimes the "Folder" option just doesn't load the files the way we would want, and this is the case with JSON files. We will explore how to work around this issue by essentially reproducing what the "Folder" option achieves while taking more control over the process. For those of you who appreciate a more visual approach, there is a detailed video at the end of this blog that you can skip to. Even so, this will be a very detailed and visual written companion to the video. I hope you enjoy!

Problem

Traditionally the "Folder" source option gracefully handles all of your delimited text files stored in a directory. This makes it extremely easy and quick to load that data into Power BI. Unfortunately, there are some common formats or structures used in text files that this option does not handle well, namely those in the JSON format. I will not go in depth explaining the JSON format, as this whole blog is about how to load these types of text files into Power BI, so it is assumed you know what I mean when I say JSON. As you can see in the screenshot below, using the Folder option is very simple, but the outcome is not desirable; we can immediately see that we need to pivot the information to better suit our needs, among other things.

FolderJSON

As you can see in the image below, after the "Folder" option performs all of its operations to bring the information into one table, you can see all the steps in the Queries pane. There is a way to move through these steps and get the output the way you want it, but this can be time consuming. The reason I do not recommend this is that the steps have an order of dependencies: usually when you make one change, you then need to address that change in another step that depended on the last one. Instead, let's talk about a way to take more control of this process from the beginning while still taking advantage of being able to pull together all the data that exists in multiple files in a folder!

CombineFile

Design

Since we now understand how awesome the "Folder" option is inside of Power BI, we need to figure out how to make this concept work for our JSON files. To start, we can borrow from what we see in the previous screenshot. We won't copy everything the "Folder" option does, but definitely a large majority. Let's break it down:

1. We can see that there is an item that is pulling a sample file

2. We also can note that there is a table where transforms are being applied

3. Lastly the “Folder” option is using a function to pass through the values of all files found in the specified directory

So let’s now take these ideas and do them ourselves in four easy steps!

Step 1 – Point to One of the JSON Files in the Directory in Order to Apply All the Transforms We Want

The way we will achieve this is by first locating the "JSON" option in the list of source options in Power BI. Once selected, we simply navigate to the directory we eventually want to load and select any of the JSON files there. It really doesn't matter which file you choose, because we have already established that the files here should be of the same file type and structure. Once we have pointed to the file in question and hit "OK", the file is loaded into the Query Editor, and we really only have one option here: to convert the data into a table format, as seen below.

JSONFile

Once it has been converted to a table, you can do whatever transformations need to occur; in the video example the only step we take is to pivot the table, choosing not to aggregate any of the values, as seen below once again. I have also gone ahead and renamed the table "JsonFile". We have now completed step one by pointing to a sample file and applying the necessary transforms. Now all we need to do is find a way to get a list of all files that exist in the folder we want to process…

Step 2 – Use the Folder Option to Point to the Directory in Question

WHAT?!?!?! We just talked about doing this in the problem portion above, so why would we want to do it? The answer is simple: we have already decided that the data isn't in the right format when using the "Folder" option, but there is one thing it does do right. The first column that is created captures the source the data was brought from, which in our case is the file name, and that is exactly what we want. So after we select the folder to import from, we can delete all columns except for "Source.Name". We aren't done just yet, though, because as you will quickly see, the filename is repeated for however many columns we had in the file; thankfully, Power BI provides an easy solution. Go ahead and right-click the "Source.Name" column and you will see an option to "Remove Duplicates". Let's also rename this column to something friendly like "Filename", as we see below, and rename the table too if needed; in my example I have left it as "Files". A minimal M sketch of this query appears just after the screenshot.

FileName
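For reference, here is a minimal M sketch of what this "Files" query can end up looking like; I am using the raw "Name" column straight from the Folder source (the screenshot shows "Source.Name", but either works as long as you end with a distinct "Filename" column):

let
    Source = Folder.Files("C:\Users\Manuel Quintana\Desktop\PDS JSON\Files"),
    // keep only the file-name column, give it a friendly name, then de-duplicate
    KeptColumns = Table.SelectColumns(Source, {"Name"}),
    Renamed = Table.RenameColumns(KeptColumns, {{"Name", "Filename"}}),
    Files = Table.Distinct(Renamed)
in
    Files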

Voila! Now we have a distinct list of the filenames that appear in the directory. Better yet, the way this connection is set up, if files are added and/or removed, it will keep up with those changes, so every time we refresh we will have an accurate list of what is in the designated folder. Step 2 now has a checkmark next to it, but we still need to figure out how to tie both steps together: we want all the transforms we set up in step 1 to apply to whatever files were discovered in step 2. The answer: use a function!

Step 3 – Create a Function From the First Table Where We Transformed Our Sample JSON File

Here we need to revisit the first table we created, where we pivoted our table. We are going to modify the M query that was written in the background for this table. As you can see in the screenshot below, there is a line of M that indicates the type of source connection that was used and the specific path of the file we chose.

UseMQuery

This is where we are going to make a change, but we must first declare a parameter at the beginning of this query. The syntax will be as follows:

(file as text) =>
let
    Source = Json.Document(File.Contents("C:\Users\Manuel Quintana\Desktop\PDS JSON\Files\" & file)),
    #"Converted to Table" = Record.ToTable(Source),
    #"Pivoted Column" = Table.Pivot(#"Converted to Table", List.Distinct(#"Converted to Table"[Name]), "Name", "Value")
in
    #"Pivoted Column"

As you can see in the query, we have removed the hard-coded file name and replaced it with the parameter called "file". Once we select the "Done" button, the table will change into a function and give you the option to pass a value to the parameter in order to invoke the function; leave this alone for now. We now have all the pieces of the puzzle; we just need to put them together.

FinishedM

Step 4 – Invoke a Custom Function to Create a New Column

This is definitely the easiest and final part of setting up this design. Go ahead and select the table created in step 2. Now select "Add Column" in the ribbon at the top and use the "Invoke Custom Function" option. As you can see in the image below, this prompts a menu where we need to give a name for the newly created column. In my case I have called it "Details", but this really doesn't matter, as we will see very shortly. Next we must specify which function we will be using from the dropdown; mine is called "JsonFile". Lastly, we must decide what value to pass to the function's parameter. Looking back at our function, we want to pass through the names of the files inside our directory, which is exactly what we have in this table's "Filename" column! After selecting the correct column and hitting OK, the new column is created, and we can see that it has some packed data inside of it, as seen below.

InvokeFunction

All you need to do is select the icon that appears inside the newly created "Details" column and you will be shown a list of all the columns packed inside. I have opted to deselect the "id" column in my example because it is a duplicate of my "Filename" column. Also, at the very bottom you will see the option "Use original column name as prefix". If you leave this checked, when you hit OK it will create all of the new columns named like "Details.ColumnName", where ColumnName is replaced by each field coming from the JSON file. I personally do not use this option, as you can see below. A rough M sketch of these two steps follows the image.

image
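Under the covers, these last two actions amount to just a couple of M steps appended to the "Files" query; here is a rough sketch, where the expanded column names depend entirely on what your JSON files contain (so "Date" and "Amount" are hypothetical):

#"Invoked Custom Function" = Table.AddColumn(Files, "Details", each JsonFile([Filename])),
// expand the nested table returned by the function into real columns
#"Expanded Details" = Table.ExpandTableColumn(#"Invoked Custom Function", "Details", {"Date", "Amount"})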

Take a step back and look at the beautiful table you have created, which contains all the details of the JSON files inside your chosen directory. What is also fantastic is that, moving forward, when you refresh this Power BI report it will always bring back exactly what is in your directory, no additional effort needed!

Solution

With our design now fully implemented, we can hit "Close & Apply", bring our data into the model and start visualizing and analyzing it. This design pattern can also be used in other scenarios; I personally have used it to cycle through different pages of a website in order to bring back all the information I was looking for. Functions can be an extremely powerful ally in Power BI! Have a great day!


Power BI Visualizations Follow Up

First, let me give a thank you to everyone who attended my webinar, and for all of your positive comments. For those of you coming to my blog who did not get a chance to attend the webinar, you can see it here. There were quite a few submitted questions, and I have chosen a couple of really good ones to share in this follow-up video blog. Some other questions were a bit more specific to the individuals who asked them; I will be working on those and reaching out personally. I hope you enjoy the video below, and I have included links afterwards for the items you will see in this video. Enjoy!

– Link for Color Blind Friendly site here

– Link for creating a JSON file for a color palette here

– Blog for using UNICHAR in Power BI here

– Pragmatic Works On Demand Training here

Introduction to SSIS – Building Your First Package

Greetings everyone. I recently delivered a live webinar covering some introductory items within SSIS. In this meeting I walked through some of the common items used in packages at the junior level, but which are also commonly used in packages of any level. I also talk briefly about the design process and a little about performance-related items. If you missed the live webinar, it can be viewed here. You will need to create an account to view the video, but you will then have access to all the other free content we have available. Any feedback would be appreciated, and if you have any questions, do not hesitate to ask. I do recommend reading my blog on the Sort transform after you watch the webinar; it takes a focused look at that component and its effect on package performance. You can read that blog here.

The topic of an introduction to SSIS was a bit difficult for me to present, only because there is so much in SSIS, and some of the most basic components can be used in more advanced ways. I am also learning that there is usually more than one way to achieve a goal in SSIS; initially you need to find the way that is easiest for you. As you increase in ability, though, you will have to leave your comfort zone and try new methods to ensure that you are developing packages in the most efficient way possible; this is known as performance tuning. If you have not already done so, I do recommend looking at Mitchell Pearson's webinar on performance tuning here.

Task Factory Secure FTP – Using the Get List of Files Option

Greetings everyone. Today I will be doing a blog for a specific component and a specific action within the Task Factory Secure FTP component. The "Get List of Files" action for this component utilizes an object variable; if you did not see my blog on object variables, please go here. Quite often we have clients who would like to use this option for the Secure FTP component in Task Factory but are unaccustomed to using object variables. In the video associated with this blog I will show you how to properly set up this component using a couple of different options. The main point of this blog is that the object variable must be parsed out; after that is accomplished, there are a number of ways you can view or write this information to a table. Just underneath the video you will see the scripts used for the two Script Tasks seen in the package. Shortly after this blog I will be doing a small write-up on how to leverage our File Filter option correctly, which is essential if you are using the "Download Directory from Server" option and need to pull only certain items. This File Filter option is also available for the "Get List of Files" option.

**It should be known that all these scripts are written in C#. Also any items italicized and in bold represent items that will change depending on the user**

First, here is the script that creates a message box to show how many rows exist within the ArrayList housed in the object variable:

MessageBoxScript

// Requires "using System.Collections;" (ArrayList) and "using System.Windows.Forms;" (MessageBox) at the top of the script
var files = (ArrayList)Dts.Variables["User::objFilesList"].Value;

// Count (capital C) is the ArrayList property holding the number of entries
MessageBox.Show(files.Count.ToString());

Dts.TaskResult = (int)ScriptResults.Success;

Here is the second script, used in the For Each Loop container, which also uses a message box, this time to display the file name:

MessageBoxScript2

// strFileName is the string variable the For Each Loop maps each file name into
MessageBox.Show(Dts.Variables["strFileName"].Value.ToString());

Dts.TaskResult = (int)ScriptResults.Success;