Mincing Data - Gain insight from data

This site will be a container for all my musings about the analytical pipeline.

For this reason it is necessary to define the analytical pipeline, or at least to define this pipeline from my point of view. From this perspective, five activities are necessary to build an analytical pipeline; these steps are

  • source
  • ingest
  • process
  • store
  • deliver


The overall goal of an analytical pipeline is to answer an analytical question. To achieve this goal, different data sources have to be targeted, and their data has to be ingested into the pipeline and properly processed. During these first steps, the ingested data often has to be stored in one or more data stores, each used for its special type of usage. Finally, the result of the data processing has to be delivered to its users, its audience. Depending on the nature of the question, different processing methods and different data stores may be used along the flow of the data through the pipeline.

 

I gave these activities a direction, but I did this also to spur your critical mind, because

  • source data is of course also stored somewhere
  • targeting the source can become a challenge in the pipeline
  • delivering processed data to its audience can become another challenge if you try to deliver to a mobile workforce
Successfully ingested data often gets processed more than once to answer different questions, and during its existence (inside an analytical data platform) this data will be transformed into various shapes, wrangled by different algorithms, and ingested into different "data stores".

For this reason I believe that these activities are tightly related, and the sequence mentioned above should just serve as guidance.

 

I will use blog posts to describe how different activities are combined to answer analytical questions. In most of my upcoming blog posts I will link to different topics from the activities used in the pipeline. Each activity has its own menu and by itself represents an essential part of the analytical pipeline.

 

Hopefully this site will help its readers as much as it helps me to focus on each activity, always knowing that most of the time more than one activity has to be mastered to find the answer to an analytical question.


Events and projections, using M

In a certain way this article is the sequel to “Power BI Data Modeling – Events and projections”, but here I’m using Power Query to create the same fact table, or at least almost the same table (the name is slightly different). If you are not aware of what this is about, please take the time to read the article mentioned above.

I tend to create tables using M, or a combination of point-and-click and M, because these tables are treated as base tables when the model is refreshed and therefore benefit from all the compression magic, whereas tables created using DAX are not compressed as well.
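To make this concrete, here is a minimal, hypothetical sketch of a table defined entirely in M; the inline data and names are made up, but any query like this is evaluated at refresh time and stored like an ordinary imported base table:

    let
        // hypothetical inline data standing in for a real source query
        Source = Table.FromRecords({
            [Date = #date(2021, 1, 1), Amount = 100],
            [Date = #date(2021, 2, 1), Amount = 250]
        }),
        // once typed, the columns are compressed at refresh like any imported base table
        Typed = Table.TransformColumnTypes(Source, {{"Date", type date}, {"Amount", Int64.Type}})
    in
        Typed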

For this reason I focus just on the M part that creates the fact table “Fact Projection M”. This is the M that creates the table; as you can see it’s a combination of point-and-click, but it also uses some advanced techniques (at least the techniques are advanced from my point of view 😉).
Here you will find the pbix file, which now also contains the M code creating the table "Fact Projection M".

 

This is the first part of the M code I consider interesting:

    // get the frequency from the Period table; of course this can also be achieved by using a Merge transformation
    // this is later used to determine the projection date
    AddCol_NoOfMonth = Table.AddColumn(
        #"Namen van kolommen gewijzigd"
        , "Frequency"
        , each
            let
                p = [Period],
                f = List.First(Table.SelectRows(Period, each [Period] = p)[Frequency])
            in f
    ),

The interesting part is how the [Period] field from the outer table is passed on by using a let expression. This makes it possible to use the outer field [Period] to query the remote table Period.
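To isolate the technique, here is a small self-contained sketch (the table names and values are made up for illustration). Inside the inner each, [Period] refers to the row of the table being filtered, so the outer row's value has to be captured in the variable p first:

    let
        // hypothetical lookup table, standing in for the Period table
        Period = Table.FromRecords({
            [Period = "Month", Frequency = 12],
            [Period = "Quarter", Frequency = 4]
        }),
        // hypothetical outer table
        Facts = Table.FromRecords({
            [Id = 1, Period = "Month"],
            [Id = 2, Period = "Quarter"]
        }),
        // p captures the outer row's [Period] before the inner each shadows it
        AddFrequency = Table.AddColumn(
            Facts
            , "Frequency"
            , each
                let
                    p = [Period]
                in
                    List.First(Table.SelectRows(Period, each [Period] = p)[Frequency])
        )
    in
        AddFrequency

The let expression creates a closure: p keeps the value from the row produced by Table.AddColumn, even though [Period] inside Table.SelectRows now refers to rows of the Period table.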

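As the comment in the code above already hints, the same result can also be achieved with a Merge transformation. A sketch, using the same made-up tables as above, could look like this:

    let
        Period = Table.FromRecords({
            [Period = "Month", Frequency = 12],
            [Period = "Quarter", Frequency = 4]
        }),
        Facts = Table.FromRecords({
            [Id = 1, Period = "Month"],
            [Id = 2, Period = "Quarter"]
        }),
        // Merge = left outer join on the Period column
        Merged = Table.NestedJoin(Facts, {"Period"}, Period, {"Period"}, "PeriodTable", JoinKind.LeftOuter),
        // expand only the Frequency column from the nested table
        Expanded = Table.ExpandTableColumn(Merged, "PeriodTable", {"Frequency"}, {"Frequency"})
    in
        Expanded

A merge usually performs better on larger tables than a row-by-row lookup, but the lookup variant keeps the logic in a single step.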
The 2nd interesting part is this:
